Discrepancies between the routes calculated by the Tour Planning API and the Route Planning API (HERE API)

I am currently trying to use HERE's API to calculate some tours for trucks.
First, I'm using the Tour Planning API to calculate the optimal tours.
In the second step, I'm using the Route Planning API to visualize these tours and get the exact routes that should be driven.
I'm having problems using the avoid features to keep the planned routes free of U-turns and difficult turns. I've come up with a small example that shows the problem. This is my tour planning request:
{
  "fleet": {
    "types": [
      {
        "id": "1561fwef8w1",
        "profile": "truck1",
        "costs": {
          "fixed": 1.0,
          "distance": 5.0,
          "time": 0.000001
        },
        "capacity": [20],
        "amount": 1,
        "shifts": [
          {
            "start": {
              "time": "2022-12-12T06:00:00Z",
              "location": {"lat": 51.4719851907272, "lng": 7.31300969864971}
            },
            "end": {
              "time": "2022-12-12T16:00:00Z",
              "location": {"lat": 51.4807604, "lng": 7.3152156}
            }
          }
        ]
      }
    ],
    "profiles": [
      {
        "type": "truck",
        "name": "truck1",
        "avoid": {
          "features": ["difficultTurns", "uTurns"]
        }
      }
    ]
  },
  "plan": {
    "jobs": [
      {
        "id": "job_0",
        "tasks": {
          "deliveries": [
            {
              "demand": [1],
              "places": [
                {
                  "location": {"lat": 51.4736547333341, "lng": 7.29935641079885},
                  "duration": 300
                }
              ]
            }
          ]
        }
      },
      {
        "id": "job_1",
        "tasks": {
          "deliveries": [
            {
              "demand": [1],
              "places": [
                {
                  "location": {"lat": 51.473125253443, "lng": 7.28609119643401},
                  "duration": 300
                }
              ]
            }
          ]
        }
      },
      {
        "id": "job_2",
        "tasks": {
          "deliveries": [
            {
              "demand": [1],
              "places": [
                {
                  "location": {"lat": 51.4871939377375, "lng": 7.30587404313616},
                  "duration": 300
                }
              ]
            }
          ]
        }
      }
    ]
  }
}
The answer is a tour that is 7.1 km long and takes 43 minutes. I'm now asking the route planning API to give me the exact route with the following request:
https://router.hereapi.com/v8/routes?via=51.4736547333341,7.29935641079885!stopDuration=300&via=51.473125253443,7.28609119643401!stopDuration=300&via=51.4871939377375,7.30587404313616!stopDuration=300&transportMode=truck&origin=51.4719851907272%2C7.31300969864971&destination=51.4807604%2C7.3152156&return=summary&apikey={API_KEY}&departureTime=2022-12-12T06%3A00%3A00&routingMode=short&avoid%5Bfeatures%5D=difficultTurns%2CuTurns
The answer now is a route that is 10.8 km long and takes 72 minutes, so the exact route is more than 3 km longer for this short tour. For larger routes I've already seen differences of 15 km and more.
When the avoid U-turns and difficult turns features are left out of both requests, the routes have roughly similar lengths: in this small example the Tour Planning route is 6.4 km and the Route Planning route is 6.9 km, which is an acceptable difference.
I'm not sure whether the Route Planning API and the Tour Planning API handle U-turns and difficult turns differently, or whether there is a way to get the exact routes directly from the Tour Planning API. Does anybody know how I can get properly planned tours using the Tour Planning API while avoiding difficult turns?

In the example from the question, the difficult turn/U-turn occurs at the driver stop itself. Tour Planning does not consider difficult turns at the stops themselves; they are only considered along the way between stops. That is what causes the discrepancy between the Tour Planning and Routing results.
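Since the discrepancy comes from how each service treats turns at the stops, it helps to generate the Routing request mechanically from the Tour Planning output rather than by hand, so both requests always carry the same stops, durations, and avoid features. A minimal sketch (`build_routing_url` is a hypothetical helper; the coordinates are the ones from the example above):

```python
from urllib.parse import urlencode

def build_routing_url(origin, destination, vias, api_key, departure_time,
                      transport_mode="truck",
                      avoid_features=("difficultTurns", "uTurns")):
    """Assemble a Routing API v8 request URL from a Tour Planning stop list.

    Each via stop carries its own stopDuration, mirroring the tour's
    service times, so both APIs see the same stops and constraints.
    """
    params = [
        ("origin", f"{origin[0]},{origin[1]}"),
        ("destination", f"{destination[0]},{destination[1]}"),
        ("transportMode", transport_mode),
        ("routingMode", "short"),
        ("departureTime", departure_time),
        ("return", "summary"),
        ("avoid[features]", ",".join(avoid_features)),
        ("apikey", api_key),
    ]
    # via may repeat; each occurrence is one intermediate stop
    for lat, lng, stop_duration in vias:
        params.append(("via", f"{lat},{lng}!stopDuration={stop_duration}"))
    return "https://router.hereapi.com/v8/routes?" + urlencode(params)

url = build_routing_url(
    origin=(51.4719851907272, 7.31300969864971),
    destination=(51.4807604, 7.3152156),
    vias=[(51.4736547333341, 7.29935641079885, 300),
          (51.473125253443, 7.28609119643401, 300),
          (51.4871939377375, 7.30587404313616, 300)],
    api_key="API_KEY",
    departure_time="2022-12-12T06:00:00",
)
```

This reproduces the request URL from the question; it does not remove the discrepancy (that is inherent to how Tour Planning scores turns at stops), but it rules out the two requests silently drifting apart.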

Related

Google analytics reporting dimensions & metrics are incompatible

We have a custom dimension defined in the Google Analytics Data API v1beta for extracting data from a Google Analytics GA4 account.
This worked fine until yesterday.
Error:
Please remove customEvent:institution_id to make the request compatible. The request's dimensions & metrics are incompatible. To learn more, see https://ga-dev-tools.web.app/ga4/dimensions-metrics-explorer/
POST - https://analyticsdata.googleapis.com/v1beta/properties/{propertyId}:runReport
Body
{
  "dateRanges": [
    {
      "startDate": "2022-08-29",
      "endDate": "2022-12-07"
    }
  ],
  "dimensions": [
    {
      "name": "customEvent:institution_id"
    },
    {
      "name": "pagePathPlusQueryString"
    }
  ],
  "dimensionFilter": {
    "andGroup": {
      "expressions": [
        {
          "filter": {
            "fieldName": "customEvent:institution_id",
            "inListFilter": {
              "values": [
                "47339486-a1e5-47be-abce-e4270af23rte"
              ]
            }
          }
        },
        {
          "filter": {
            "fieldName": "pagePathPlusQueryString",
            "stringFilter": {
              "matchType": "PARTIAL_REGEXP",
              "value": "/clip/.+",
              "caseSensitive": false
            }
          }
        }
      ]
    }
  },
  "metrics": [
    {
      "name": "screenPageViews"
    }
  ],
  "metricAggregations": [
    "TOTAL"
  ],
  "limit": "10000"
}
Dimensions that include the query string, like pagePathPlusQueryString, are only compatible with a limited set of dimensions and metrics.
This change was announced on 2022-09-13 in the schema compatibility changes announcement, and it went live earlier this week. So the cause of the error is that the customEvent:institution_id dimension is not compatible with pagePathPlusQueryString.
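One way to catch this class of error before running the report is the Data API v1beta checkCompatibility method, which accepts the same dimensions and metrics as runReport and reports which of them can be combined. A minimal sketch of building its request body (the property ID and OAuth token handling are assumed to be supplied elsewhere):

```python
def compatibility_payload(dimensions, metrics):
    """Build a checkCompatibility request body from dimension/metric names."""
    return {
        "dimensions": [{"name": d} for d in dimensions],
        "metrics": [{"name": m} for m in metrics],
        # Only list fields that remain compatible with the rest of the request
        "compatibilityFilter": "COMPATIBLE",
    }

payload = compatibility_payload(
    ["customEvent:institution_id", "pagePathPlusQueryString"],
    ["screenPageViews"],
)

# POST https://analyticsdata.googleapis.com/v1beta/properties/{propertyId}:checkCompatibility
# with this payload (requires an OAuth bearer token; omitted here)
```

Anything missing from the response's compatible lists, such as the customEvent:institution_id plus pagePathPlusQueryString pairing above, will be rejected by runReport.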

my city location is not available in IBM Watson api NLU

When I send text to the Watson NLU API containing my city, which is located in India, I get back an empty entities list. It should come back with a Location entity. How can I solve this problem in Watson NLU?
The sentence being sent is:
mba college in bhubaneswar
where Bhubaneswar is the city
So based on your comment, the sentence is:
"mba college in bhubaneswar"
Putting that into NLU, entity detection fails with:
Error: unsupported text language: unknown, Code: 400
The first issue is that, because no language is specified, NLU tries to guess the language, but there is not enough text to guess from (even if it is obvious to you).
The second issue is that even if you specify the language, it will not be fully recognised, because it's not a real sentence, it's a fragment.
NLU doesn't just do a keyword lookup; it tries to understand the parts of speech (POS) and, from that, determine what each word means.
So if I give it a real sentence it will work. For example:
I go to an MBA college in Bhubaneswar
I used this sample code:
import json
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, RelationsOptions

ctx = {
    "url": "https://gateway.watsonplatform.net/natural-language-understanding/api",
    "username": "USERNAME",
    "password": "PASSWORD"
}

version = '2017-02-27'
text = "I go to an MBA college in Bhubaneswar"
# text = "mba college in bhubaneswar"

nlu = NaturalLanguageUnderstandingV1(version=version,
                                     username=ctx.get('username'),
                                     password=ctx.get('password'))

entities = EntitiesOptions()
relations = RelationsOptions()

response = nlu.analyze(text=text,
                       features=Features(entities=entities, relations=relations),
                       language='en')
print(json.dumps(response, indent=2))
That gives me the following results.
{
  "usage": {
    "text_units": 1,
    "text_characters": 37,
    "features": 2
  },
  "relations": [
    {
      "type": "basedIn",
      "sentence": "I go to an MBA college in Bhubaneswar",
      "score": 0.669215,
      "arguments": [
        {
          "text": "college",
          "location": [15, 22],
          "entities": [
            {
              "type": "Organization",
              "text": "college"
            }
          ]
        },
        {
          "text": "Bhubaneswar",
          "location": [26, 37],
          "entities": [
            {
              "type": "GeopoliticalEntity",
              "text": "Bhubaneswar"
            }
          ]
        }
      ]
    }
  ],
  "language": "en",
  "entities": [
    {
      "type": "Location",
      "text": "Bhubaneswar",
      "relevance": 0.33,
      "disambiguation": {
        "subtype": ["IndianCity", "City"],
        "name": "Bhubaneswar",
        "dbpedia_resource": "http://dbpedia.org/resource/Bhubaneswar"
      },
      "count": 1
    }
  ]
}
If it's the case that you will only ever get fragments to scan, then @ReeceMed's solution will resolve it for you.
If the NLU service does not recognise the city you have entered, you can create a custom model using Watson Knowledge Studio, which can then be deployed to the NLU service, giving customised entities and relationships.

can we retrieve only the intent using watson conversation?

Currently we are using the Watson Natural Language Classifier (NLC) service to get the intent of a user's question. But configuring and maintaining NLC is becoming difficult, so I was wondering whether it is possible to get only the intent of the user's question using the Watson Conversation service: just the intent, not the dialog response.
The intent comes back as part of the response from Conversation. If you set the parameter alternate_intents=true, the top 10 intents are returned.
You will still get the rest of the payload, but you can ignore it. I would recommend creating one dialog node with a condition of true and nothing else; this prevents SPEL errors when no matching node is found.
Your response will look something like this.
{
  "alternate_intents": true,
  "context": {
    "conversation_id": "6c256e10-ba3b-4d2b-84fc-740853879d4f",
    "system": {
      "_node_output_map": { "True": [0] },
      "branch_exited": true,
      "branch_exited_reason": "completed",
      "dialog_request_counter": 1,
      "dialog_stack": [ { "dialog_node": "root" } ],
      "dialog_turn_counter": 1
    }
  },
  "entities": [],
  "input": { "text": "test" },
  "intents": [
    { "intent": "intent1", "confidence": 1.0 },
    { "intent": "intent2", "confidence": 0.9 },
    { "intent": "intent3", "confidence": 0.8 },
    { "intent": "intent4", "confidence": 0.7 },
    { "intent": "intent5", "confidence": 0.6 },
    { "intent": "intent6", "confidence": 0.5 },
    { "intent": "intent7", "confidence": 0.4 },
    { "intent": "intent8", "confidence": 0.3 },
    { "intent": "intent9", "confidence": 0.2 },
    { "intent": "intent10", "confidence": 0.1 }
  ],
  "output": {
    "log_messages": [],
    "nodes_visited": [ "True" ],
    "text": [ "" ]
  }
}
All you need to reference is json_response['intents']. Also, if you only care about the intent, you do not need to keep sending back the context.
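For example, pulling just the top intent out of a response shaped like the one above could look like this (a sketch; `top_intent` is a hypothetical helper and the threshold value is arbitrary):

```python
def top_intent(response, threshold=0.2):
    """Return the highest-confidence intent name, or None if the intents
    list is empty or the best match falls below the threshold."""
    intents = response.get("intents", [])
    if not intents:
        return None
    best = max(intents, key=lambda i: i["confidence"])
    return best["intent"] if best["confidence"] >= threshold else None

response = {
    "intents": [
        {"intent": "intent1", "confidence": 1.0},
        {"intent": "intent2", "confidence": 0.9},
    ]
}
print(top_intent(response))  # intent1
```

The empty-list check matters with Conversation specifically, because (as noted below on confidence scoring) its intents list can legitimately come back empty.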
Just to add to this. NLC and Conversation use two very different learning models.
NLC uses "Relative Confidence"
Conversation uses "Absolute Confidence"
In the case of Relative, all the confidences of the items found add up to 1. In layman's terms, NLC automatically assumes that the answer can only be one of the intents it has been given.
For Absolute, the confidences relate only to that intent. This means that conversation can understand that what you are saying may not be in the training it has been given. It also means that your intent list can come back empty.
So don't panic if something that was giving you 90% before is now giving you 60%. They are just scoring differently.
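With made-up numbers, the difference between the two scoring schemes looks like this: relative scores are normalized so they sum to 1, while absolute scores stand alone per intent:

```python
# Hypothetical raw per-intent scores; they do not sum to 1
raw_scores = {"book_flight": 0.45, "cancel_flight": 0.30, "chitchat": 0.05}

# Relative confidence (NLC-style): normalize so the scores sum to 1,
# implicitly assuming the answer must be one of the trained intents
total = sum(raw_scores.values())
relative = {k: v / total for k, v in raw_scores.items()}

# Absolute confidence (Conversation-style): each score stands alone,
# so every intent can legitimately score low, or the list can be empty
absolute = dict(raw_scores)
```

Here the same raw evidence yields a relative "book_flight" score of 0.5625 but an absolute score of only 0.45, which is why identical training can produce very different-looking numbers in the two services.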

How do you get data from google analytics -v4 for acquisitions - campaigns?

Can someone give an example of the query to be used? How do we access the acquisitions and then the campaign data from that?
The simplest method would be to make an Analytics Reporting API V4 request with the ga:newUsers metric and the ga:source, ga:medium, and ga:campaign dimensions.
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet
{
  "reportRequests": [
    {
      "viewId": "1174",
      "dateRanges": [
        {
          "startDate": "2014-11-01",
          "endDate": "2014-11-30"
        }
      ],
      "metrics": [
        {
          "expression": "ga:newUsers"
        }
      ],
      "dimensions": [
        { "name": "ga:campaign" },
        { "name": "ga:source" },
        { "name": "ga:medium" }
      ]
    }
  ]
}
And again in the API Explorer.
The API also allows you to construct a cohort request to measure engagement over time.
If you are new to Google's APIs, they make available many client libraries as well as a set of quickstart guides.
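For completeness, a sketch of sending the same request from Python with the google-api-python-client library (the client call is shown commented out; credentials handling is omitted and the view ID is the placeholder from the example above):

```python
# Requires: pip install google-api-python-client google-auth
# from googleapiclient.discovery import build

def campaign_report_body(view_id, start_date, end_date):
    """Build a Reporting API v4 batchGet body for new users
    broken down by campaign, source, and medium."""
    return {
        "reportRequests": [{
            "viewId": view_id,
            "dateRanges": [{"startDate": start_date, "endDate": end_date}],
            "metrics": [{"expression": "ga:newUsers"}],
            "dimensions": [{"name": "ga:campaign"},
                           {"name": "ga:source"},
                           {"name": "ga:medium"}],
        }]
    }

body = campaign_report_body("1174", "2014-11-01", "2014-11-30")

# analytics = build("analyticsreporting", "v4", credentials=credentials)
# response = analytics.reports().batchGet(body=body).execute()
```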

Google Analytics Reporting API V4 Lifetime value requests - invalid dimensions/metrics

I'm trying to make calls the Analytics Reporting API V4 and keep getting back unspecific error messages when trying to use certain dimensions and metrics. For example, I consistently get
{
  "error": {
    "code": 400,
    "message": "Unknown dimension(s): ga:acquisitionTrafficChannel",
    "status": "INVALID_ARGUMENT"
  }
}
when passing ga:acquisitionTrafficChannel, despite it being documented as a valid dimension. Similarly, I get
{
  "error": {
    "code": 400,
    "message": "Selected dimensions and metrics cannot be queried together.",
    "status": "INVALID_ARGUMENT"
  }
}
when passing ga:acquisitionSourceMedium (documented here), even when not passing any metrics whatsoever.
Are the docs out of date? Is there some documentation elsewhere about valid combinations of dimensions and metrics?
All the Lifetime Value reports, and thus the ga:acquisition... dimensions, are only valid for app views, not web views.
Secondly, the cohort/LTV dimensions can only be queried within a cohort request, for example:
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet
{
  "reportRequests": [
    {
      "viewId": "XXXX",
      "dimensions": [
        { "name": "ga:cohort" },
        { "name": "ga:acquisitionTrafficChannel" }
      ],
      "metrics": [
        { "expression": "ga:cohortSessionsPerUser" }
      ],
      "cohortGroup": {
        "cohorts": [
          {
            "name": "cohort 1",
            "type": "FIRST_VISIT_DATE",
            "dateRange": {
              "startDate": "2015-08-01",
              "endDate": "2015-09-01"
            }
          },
          {
            "name": "cohort 2",
            "type": "FIRST_VISIT_DATE",
            "dateRange": {
              "startDate": "2015-07-01",
              "endDate": "2015-08-01"
            }
          }
        ],
        "lifetimeValue": true
      }
    }
  ]
}
The error messages should probably be a bit clearer.
I ran into this problem as well. When I was in the Google Analytics Dashboard, I clicked on Acquisition->All Traffic->Channels and was fooled into thinking that I needed to combine the ga:acquisitionMedium dimension and ga:newUsers metric together.
When I clicked on ga:acquisitionMedium, it said that combining it with ga:newUsers was valid, despite the error that you mentioned in your question! In reality, I just needed to combine ga:medium and ga:newUsers together.
I know this is not the exact query that you were doing, but here is an example of how I queried the New Users count where the medium dimension equaled "organic" (note that I am forming the JSON request with JavaScript and then using JSON.stringify(req) to send it):
var req = {
  reportRequests: [{
    viewId: '<Your Google Analytics view ID>',
    dimensions: [{ name: 'ga:medium' }],
    dimensionFilterClauses: [{
      filters: [{
        dimensionName: 'ga:medium',
        operator: 'EXACT',
        expressions: ['organic']
      }]
    }],
    dateRanges: [{ startDate: '2019-11-01', endDate: '2019-11-30' }],
    metrics: [{ expression: "ga:newUsers" }]
  }]
};
The above query returns 5,654, which is the same as seen in the "Acquisition" section of Google Analytics.
I definitely think the documentation and error message around this could be improved.
