Mapbox Distance Matrix API returns all zeroes

I'm trying to implement the Mapbox Distance Matrix API as an alternative to Google, which was becoming too expensive. I've reduced the example to something minimal, with only two values:
{
  code: "Ok",
  distances: [
    [ 0, 0 ]
  ],
  durations: [
    [ 0, 0, 0, 0, 0, 0 ]
  ],
  destinations: [
    {
      distance: 404951.186070298,
      name: "",
      location: [ 48.761423, 5.731594 ]
    },
    {
      distance: 402983.402982556,
      name: "",
      location: [ 48.761423, 5.731594 ]
    }
  ],
  sources: [
    {
      distance: 401905.604376238,
      name: "",
      location: [ 48.761423, 5.731594 ]
    }
  ]
}
I see that the returned coordinates are all identical, and they do not match the input coordinates from my URL, which are 52.08515,4.2826;52.11703,4.28716;52.11736,4.28939. The problem persists with all modes of transportation. Any help would be appreciated!
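For reference, my request looks something like this (shown here with the driving profile; the token is redacted):

https://api.mapbox.com/directions-matrix/v1/mapbox/driving/52.08515,4.2826;52.11703,4.28716;52.11736,4.28939?access_token=<token>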

The format is lon,lat, not lat,lon. I made the same mistake, but the docs are correct.
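A sketch of the corrected request, with each coordinate pair swapped to lon,lat order (the driving profile and token are placeholders here):

https://api.mapbox.com/directions-matrix/v1/mapbox/driving/4.2826,52.08515;4.28716,52.11703;4.28939,52.11736?access_token=<token>

With the pairs in lon,lat order the points resolve near The Hague instead of collapsing onto a single distant snap point, and the distances and durations should no longer be zero.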

Related

Using mongolite in R to extract individual array items

I'm using mongolite in R to read a mongo collection with the following structure:
[
  {_id: 0, date: 20221201, dailyAnswer: [
    {question: a, score: 1},
    {question: b, score: 3},
    {question: c, score: 2}
  ]},
  {_id: 1, date: 20221201, dailyAnswer: [
    {question: a, score: 3},
    {question: b, score: 2},
    {question: c, score: 1}
  ]},
  {_id: 0, date: 20221202, dailyAnswer: [
    {question: a, score: 2},
    {question: b, score: 2},
    {question: c, score: 3}
  ]},
  {_id: 1, date: 20221202, dailyAnswer: [
    {question: a, score: 3},
    {question: b, score: 1},
    {question: c, score: 1}
  ]}
]
For each document I'd like to extract each question score into a column, with the table structure:
_id | date | question_a_score | question_b_score | question_c_score
In MongoDB Compass I've written a query to extract them:
{
  q_a_score: { $arrayElemAt: ["$dailyAnswer.score", 0] },
  q_b_score: { $arrayElemAt: ["$dailyAnswer.score", 1] },
  q_c_score: { $arrayElemAt: ["$dailyAnswer.score", 2] }
}
Which returns:
[
  {
    _id: 0,
    question_a_score: 1,
    question_b_score: 3,
    question_c_score: 2
  },
  ...,
  {
    _id: 1,
    question_a_score: 3,
    question_b_score: 1,
    question_c_score: 1
  }
]
However, I'm not sure whether to use the aggregate or find method in mongolite in R, nor how to structure the pipeline or query argument for each.
Use the aggregate method with the $project and $arrayElemAt operators:
checkins_questions <- collection_connection$aggregate(pipeline = '[
  {"$project": {
    "dailyAnswerScore1": {"$arrayElemAt": ["$dailyAnswer.score", 0]},
    "dailyAnswerScore2": {"$arrayElemAt": ["$dailyAnswer.score", 1]},
    "dailyAnswerScore3": {"$arrayElemAt": ["$dailyAnswer.score", 2]}
  }}
]')

ST_Distance between LineString and Point in Azure Cosmos DB

I have a route that is stored as a set of points.
{
  "id": "9fc9b1e9-6062-4c65-820d-992569618883",
  "shape": [
    16.373056, 48.208333,
    16.478611, 48.141111,
    17.112778, 48.144722
  ]
}
I want to find the nearest route to a given point. For example: give me a route that is less than 25 km from point XY.
To be able to use the built-in geospatial query functions in Azure Cosmos DB, I need to make some changes to the document structure. My first attempt was to use the LineString type.
{
  "id": "9fc9b1e9-6062-4c65-820d-992569618883",
  "shape": {
    "type": "LineString",
    "coordinates": [
      [16.373056, 48.208333],
      [16.478611, 48.141111],
      [17.112778, 48.144722]
    ]
  }
}
Then I run the following query:

SELECT tf.id, ST_DISTANCE(tf.shape, {type: "Point", "coordinates": [16.6475, 48.319444]})
FROM tf
WHERE ST_DISTANCE(tf.shape, {type: "Point", "coordinates": [16.6475, 48.319444]}) < 25000

with this result:
[
  {
    "id": "9fc9b1e9-6062-4c65-820d-992569618883",
    "$1": 19683.798772898
  }
]
Based on my research, it seems plausible that ST_DISTANCE found a point on the route that is under 25 km away.
However, when I have a large document with many points (around 15,000), the result is always []. (It is a different dataset, so the numbers differ.) For example,

SELECT tf.id, ST_DISTANCE(tf.shape, {type: "Point", "coordinates": [10.09, 52.831667]})
FROM tf
WHERE ST_DISTANCE(tf.shape, {type: "Point", "coordinates": [10.09, 52.831667]}) < 10000

returns [].
What I tried next was to wrap every point in its own GeoJSON object and put them all in an array.
{
  "id": "265de514-8995-4976-aeca-1f5d0ab0931d",
  "shape": [
    {
      "type": "Point",
      "coordinates": [9.38626, 51.01587]
    },
    {
      "type": "Point",
      "coordinates": [9.38829, 51.01533]
    },
    {
      "type": "Point",
      "coordinates": [9.38853, 51.01554]
    }
    ...another set of 15000 points
  ]
}
When I execute a query like

SELECT tf.id, locations.coordinates, ST_DISTANCE(locations, {type: "Point", "coordinates": [10.09, 52.831667]})
FROM tf
JOIN locations IN tf.shape
WHERE ST_DISTANCE(locations, {type: "Point", "coordinates": [10.09, 52.831667]}) < 10000

it returns all points on the route under 10 km:
[
  {
    "id": "265de514-8995-4976-aeca-1f5d0ab0931d",
    "coordinates": [9.97907, 52.77248],
    "$1": 9967.70776520528
  },
  {
    "id": "265de514-8995-4976-aeca-1f5d0ab0931d",
    "coordinates": [9.97908, 52.77274],
    "$1": 9948.088917723748
  }
  ...another set of points under 10 km
]
Am I using ST_DISTANCE correctly, and if so, why don't I get any results? Are there any service limitations? If I'm using it incorrectly, what is the correct way to implement this functionality? I see the possibility of using the array of points, but it seems somewhat clunky.

Is there a way to use jq to transform these two arrays into a set of objects, as in the example below?

Example JSON data:
{
  "data": [
    {
      "place": "FM346",
      "id": [
        "7_day_A",
        "7_day_B",
        "7_day_C",
        "7_day_D"
      ],
      "values": [0, 30, 23, 43]
    },
    {
      "place": "LH210",
      "id": [
        "1_day_A",
        "1_day_B",
        "1_day_C",
        "1_day_D"
      ],
      "values": [4, 45, 100, 9]
    }
  ]
}
What I need to transform it into:
{
  "data": [
    {
      "place": "FM346",
      "7_day_A": { "value": 0 },
      "7_day_B": { "value": 30 },
      "7_day_C": { "value": 23 },
      "7_day_D": { "value": 43 }
    },
    {
      "place": "LH210",
      "1_day_A": { "value": 4 },
      "1_day_B": { "value": 45 },
      "1_day_C": { "value": 100 },
      "1_day_D": { "value": 9 }
    }
  ]
}
I have tried this:
{
  data: [.data | .[] |
    {
      place: (.place),
      (.id[]): {
        value: (.values[])
      }
    }]
}
(in jqplay: https://jqplay.org/s/f4BBtN9gwmp)
and this:
{
  data: [.data | .[] |
    {
      place: (.place),
      test: [{
        (.id[]): {
          value: (.values[])
        }
      }]
    }]
}
(in jqplay: https://jqplay.org/s/pKIvQe1CzgX)
but they aren't grouped the way I wanted, and every value gets paired with every id instead of only the corresponding one.
I have been trying for some time now, but I'm new to jq and have no idea how to do this transformation. Thanks in advance for any answers.
You can use transpose here, which plays a key role in converting the arrays to key/value pairs:
.data[] |= {place} +
([ .id, .values ] | transpose | map({(.[0]): { value: .[1] } }) | add)
The solution works by transposing the array-of-arrays [.id, .values], i.e. converting
[["7_day_A","7_day_B","7_day_C","7_day_D"],[0,30,23,43]]
[["1_day_A","1_day_B","1_day_C","1_day_D"],[4,45,100,9]]
to
[["7_day_A",0],["7_day_B",30],["7_day_C",23],["7_day_D",43]]
[["1_day_A",4],["1_day_B",45],["1_day_C",100],["1_day_D",9]]
With the transposition done, we construct an object whose key is the zeroth element of each pair and whose value is an object wrapping the first element, then combine the results with add.
Demo - jqplay
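A quick way to try this locally, assuming the example input is saved as input.json:

jq '.data[] |= {place} + ([.id, .values] | transpose | map({(.[0]): {value: .[1]}}) | add)' input.json

This prints the desired output shown in the question.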

Create a multi-line chart in Kibana Vega with a legend based on inner bucket key values

I have a query response providing data in the following format:
{
  "key_as_string": "2022-02-28T00:00:00.000Z",
  "key": 1646006400000,
  "doc_count": 2070,
  "count": {
    "doc_count": 3992,
    "categories_count": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        { "key": 1, "doc_count": 3070 },
        { "key": 5, "doc_count": 316 },
        { "key": 3, "doc_count": 178 },
        { "key": 0, "doc_count": 26 },
        { "key": 7, "doc_count": 26 },
        { "key": 6, "doc_count": 20 },
        { "key": 2, "doc_count": 10 }
      ]
    }
  }
}
How do I create a multi-line chart with a legend, where each key value is one line, the y-axis is the doc_count, and the x-axis is the key_as_string time? I also need to handle the fact that the inner buckets do not emit entries for empty keys (key values with a count of 0 are missing), although the key can only be in the range 0-7.
To create a multi-line chart, refer to the gallery; what you want is a line chart. My advice: look around the gallery for the things you want to include in your visualisation and extract those pieces, and don't forget to refer to the documentation too.
For the 0-count buckets, this is something I've encountered before. Try doing the terms aggregation before the date_histogram aggregation: the date_histogram aggregation produces buckets for 0 counts, whereas the terms aggregation does not.
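A minimal sketch of the reordered Elasticsearch aggregation, assuming the category field is called category and the timestamp field @timestamp (both names are placeholders, since the question doesn't show the original query):

{
  "size": 0,
  "aggs": {
    "categories_count": {
      "terms": { "field": "category" },
      "aggs": {
        "per_day": {
          "date_histogram": {
            "field": "@timestamp",
            "calendar_interval": "day",
            "min_doc_count": 0
          }
        }
      }
    }
  }
}

With min_doc_count: 0 on the date_histogram, every term gets a continuous series of date buckets, so each line in the Vega chart has a point for every interval even when the count is zero.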

Watson doesn't get zeros

Let's say I have a conversation service configured in IBM Watson, ready to recognize a number given in words and in pieces. For example, the number 1320 can be sent as thirteen twenty or thirteen two zero, etc.
In the first case I'll get something like this from the conversation service:
{
  // ...
  "entities": [
    {
      "entity": "sys-number",
      "location": [0, 5],
      "value": "13",
      "confidence": 1,
      "metadata": { "numeric_value": 13 }
    },
    {
      "entity": "sys-number",
      "location": [6, 12],
      "value": "20",
      "confidence": 1,
      "metadata": { "numeric_value": 20 }
    }
  ]
  // ...
}
In the second case (thirteen two zero):
{
  // ...
  "entities": [
    {
      "entity": "sys-number",
      "location": [0, 5],
      "value": "13",
      "confidence": 1,
      "metadata": { "numeric_value": 13 }
    },
    {
      "entity": "sys-number",
      "location": [6, 14],
      "value": "2",
      "confidence": 1,
      "metadata": { "numeric_value": 2 }
    }
  ]
  // ...
}
The big question here is: Where is my zero?
I know this question has been asked more than once, but none of the answers I found solved my current issues.
I've seen examples where a regular expression could be used, but that's for actual digits; here I have words, and Watson is the one that actually guesses the number.
Is there a way to obtain a third entry in my entities for that zero? Or another workaround? Or some configuration I may be lacking?
I just tried this in Watson Assistant and it now gets the zero. With #sys-numbers enabled my utterance is thirteen two zero and I get these entities back:
entities: [
  0: {
    entity: "sys-number",
    location: [0, 8],
    value: "13",
    confidence: 1,
    metadata: { numeric_value: 13 }
  }
  1: {
    entity: "sys-number",
    location: [9, 12],
    value: "2",
    confidence: 1,
    metadata: { numeric_value: 2 }
  }
  2: {
    entity: "sys-number",
    location: [13, 17],
    value: "0",
    confidence: 1,
    metadata: { numeric_value: 0 }
  }
]
It might be that the entity matching has been improved.
