I'm using HERE Maps in my application to calculate toll costs, but for GB it works differently: toll entries for countries other than GB have fields like name and tollSystemId, while GB entries only have adminId. Can you explain the reason for this? And how should I work with GB tolls?
Here is a query sample:
https://tce.api.here.com/2/calculateroute.json?alternatives=0&commercial=1&cost_optimize=0&currency=EUR&detail=1&driver_cost=0&emissionType=5&height=4&length=16.6&limitedWeight=25&linkattributes=sh%2Cle&maneuverattributes=po%2Cli%2Cno%2Crs&metricsystem=metric&mode=fastest%3Btruck%3Btraffic%3Adisabled&rollup=tollsys%2Ccountry%2Cnone&routeattributes=wp%2Cno%2Csh%2Csm%2Csc&shippedHazardousGoods=&tollVehicleType=3&trailerHeight=4&trailerNumberAxles=3&trailerType=2&trailersCount=1&truckRestrictionPenalty=soft&vehicleNumberAxles=2&vehicleWeight=24&vehicle_cost=0&waypoint0=48.85661400000001%2C2.3522219000000177&waypoint1=51.48158100000001%2C-3.1790899999999738&weightPerAxle=10.5&width=2.45
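As an aside, hand-assembled query strings like this are easy to mangle (for example, a literal `&currency` can get swallowed as the HTML entity `&curren;` when pasted into a web page). A minimal sketch, using only the Python standard library and a hypothetical subset of the parameters above, of building the parameter string safely:

```python
from urllib.parse import urlencode

# Hypothetical subset of the request parameters; values mirror the query above.
params = {
    "currency": "EUR",
    "tollVehicleType": 3,
    "vehicleWeight": 24,
    "waypoint0": "48.856614,2.3522219",
    "waypoint1": "51.481581,-3.17909",
}

# urlencode percent-escapes values and joins them with '&', so reserved
# characters in coordinates or mode strings cannot corrupt the URL.
query = urlencode(params)
url = "https://tce.api.here.com/2/calculateroute.json?" + query
print(query)
```

Building the string with `urlencode` rather than concatenation also keeps commas in waypoints correctly escaped as `%2C`.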
For a toll amount, these properties are possible:
"CurrencyAmount": {
"type": "object",
"properties": {
"adminId": {
"type": "string",
"description": "country's admin place id"
},
"amountInTargetCurrency": {
"type": "number",
"format": "double"
},
"country": {
"type": "string"
},
"languageCode": {
"type": "string",
"description": "Language code of toll system name"
},
"name": {
"type": "string",
"description": "Toll system name"
},
"tollSystemId": {
"type": "string"
}
}
}
For your query, only adminId and amountInTargetCurrency are available in our database for GB. So in your code, check whether a property key is present in the response before trying to read its value. With respect to your query, the truck goes through three toll roads (one in France and two in Great Britain). The details of each toll road are returned in the response. For France the details are:
{
  "tollSystemId": "5171",
  "name": "SANEF",
  "languageCode": "ENG",
  "amountInTargetCurrency": 65.3
}
and for Great Britain, this information is returned for the two toll roads:
{
  "adminId": "20248595",
  "amountInTargetCurrency": 2.25
},
{
  "adminId": "20287683",
  "amountInTargetCurrency": 2.25
}
You can see that for each toll road the amount is available in the response, so we do have toll information for Great Britain. If you look further in the response, you will see the total toll cost is 69.8, which is the sum of the toll in France (65.3) and the two tolls in GB (2.25 each).
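The key-presence check described above can be sketched in plain Python. The toll entries below are hard-coded from the sample response, and the fallback logic (label by name, else by adminId) is one possible approach, not the only one:

```python
# Toll entries copied from the sample response above.
tolls = [
    {"tollSystemId": "5171", "name": "SANEF",
     "languageCode": "ENG", "amountInTargetCurrency": 65.3},
    {"adminId": "20248595", "amountInTargetCurrency": 2.25},
    {"adminId": "20287683", "amountInTargetCurrency": 2.25},
]

total = 0.0
for toll in tolls:
    # name/tollSystemId may be absent (as for GB), so fall back to adminId.
    label = toll.get("name") or toll.get("adminId", "unknown")
    amount = toll.get("amountInTargetCurrency", 0.0)
    total += amount
    print(f"{label}: {amount}")

print(f"total: {total}")  # 65.3 + 2.25 + 2.25 = 69.8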
Related
I am currently trying to use HERE's API to calculate some tours for trucks.
First, I'm using the Tour Planning API to calculate the optimal tours.
In the second step, I'm using the Route Planning API to visualize these tours and get the exact routes that should be driven.
Somehow I'm having problems using the avoid features to prevent routes with U-turns and difficult turns. I've come up with a small example that shows the problem. This is my example tour planning request:
{
"fleet": {
"types": [
{
"id": "1561fwef8w1",
"profile": "truck1",
"costs": {
"fixed": 1.0,
"distance": 5.0,
"time": 0.000001
},
"capacity": [20],
"amount": 1,
"shifts" : [
{
"start" : {
"time" : "2022-12-12T06:00:00Z",
"location" : {"lat": 51.4719851907272,"lng": 7.31300969864971}
},
"end" : {
"time" : "2022-12-12T16:00:00Z",
"location" : {"lat": 51.4807604,"lng": 7.3152156}
}
}
]
}
],
"profiles": [
{
"type": "truck",
"name": "truck1",
"avoid" : {
"features" : ["difficultTurns", "uTurns"]
}
}
]
},
"plan": {
"jobs": [
{
"id": "job_0",
"tasks": {
"deliveries": [
{
"demand": [1],
"places": [
{
"location": {"lat": 51.4736547333341,"lng": 7.29935641079885},
"duration": 300
}
]
}
]
}
},
{
"id": "job_1",
"tasks": {
"deliveries": [
{
"demand": [1],
"places": [
{
"location": {"lat": 51.473125253443,"lng": 7.28609119643401},
"duration": 300
}
]
}
]
}
},
{
"id": "job_2",
"tasks": {
"deliveries": [
{
"demand": [1],
"places": [
{
"location": {"lat": 51.4871939377375,"lng": 7.30587404313616},
"duration": 300
}
]
}
]
}
}
]
}
}
The answer is a tour that is 7.1 km long and takes 43 minutes. I'm now asking the route planning API to give me the exact route with the following request:
https://router.hereapi.com/v8/routes?via=51.4736547333341,7.29935641079885!stopDuration=300&via=51.473125253443,7.28609119643401!stopDuration=300&via=51.4871939377375,7.30587404313616!stopDuration=300&transportMode=truck&origin=51.4719851907272%2C7.31300969864971&destination=51.4807604%2C7.3152156&return=summary&apikey={API_KEY}&departureTime=2022-12-12T06%3A00%3A00&routingMode=short&avoid%5Bfeatures%5D=difficultTurns%2CuTurns
The answer now is a route which is 10.8 km long and takes 72 minutes, so the exact route is more than 3 km longer for this short route. For larger routes I've already seen differences of 15 km and more.
When I leave the avoid U-turns and difficult turns features out of both requests, the routes have roughly similar lengths: in this small example the Tour Planning API route is 6.4 km and the Route Planning API route is 6.9 km, which is an acceptable difference.
I'm not sure whether the Route Planning API and the Tour Planning API handle U-turns and difficult turns differently, or whether there is a way to get the exact routes directly from the Tour Planning API. Does anybody know how I can get properly planned tours using the Tour Planning API while avoiding difficult turns?
In the problem described, the difficult turn/U-turn happens at the driver stop itself. Tour Planning does not consider difficult turns at the stop; they are only considered if they occur along the way. That explains the discrepancy between the Tour Planning and Routing results.
What exactly does the PROXIMITY parameter describe, and why do ORIGIN and TO always have the same PROXIMITY?
Please see the below json snippet taken from the HERE Incident API for an example -
EDIT:
My question is not specific to the json example below but a more general question regarding the meaning of the PROXIMITY parameter.
For instance, "midway between" is pretty self explanatory. What does it mean for a traffic incident to be "at" two points or "past" two points?
In addition, for all the data I have looked at ORIGIN:PROXIMITY:DESCRIPTION is always the same as TO:PROXIMITY:DESCRIPTION. Why?
{
"INTERSECTION": {
"ORIGIN": {
"ID": "",
"STREET1": {
"ADDRESS1": "Pletschenau"
},
"STREET2": {
"ADDRESS1": "Schillerweg"
},
"COUNTY": "Calw",
"STATE": "",
"PROXIMITY": {
"ID": "MID",
"DESCRIPTION": "midway between"
}
},
"TO": {
"ID": "",
"STREET1": {
"ADDRESS1": "Pletschenau"
},
"STREET2": {
"ADDRESS1": "Birkenweg"
},
"COUNTY": "Calw",
"STATE": "",
"PROXIMITY": {
"ID": "MID",
"DESCRIPTION": "midway between"
}
}
},
"GEOLOC": {
"ORIGIN": {
"LATITUDE": 48.73873,
"LONGITUDE": 8.73767
},
"TO": [{
"LATITUDE": 48.74108,
"LONGITUDE": 8.73581
}]
}
}
We expect that your use case matches the following example: https://developer.here.com/documentation/examples/rest/traffic/traffic-incidents-via-proximity
This example retrieves traffic incident information for a specific area of Berlin, defined by a radius of 15 km around a specific point (the prox parameter).
The start (ORIGIN) and end (TO) both represent the same waypoint here, which explains why the two are not different. If your API call is different, please share the REST API call.
When I send text containing my city (which is located in India) to the Watson NLU API, I get an empty entities array. It should return a Location entity for the city. How can I solve this problem in Watson NLU?
The sentence being sent is:
mba college in bhubaneswar
where Bhubaneswar is the city
So based on your comment sentence of:
"mba college in bhubaneswar"
Putting that into NLU and entity detection fails with:
Error: unsupported text language: unknown, Code: 400
The first issue is that because no language is specified, it tries to guess the language. But there is not enough there to guess (even if it is obvious to you).
The second issue is that, even if you specify the language, it will not fully recognise the text, because it's not a real sentence, it's a fragment.
NLU doesn't just do a keyword lookup. It tries to understand the parts of speech (POS) and, from that, determine what each word means.
So if I give it a real sentence it will work. For example:
I go to an MBA college in Bhubaneswar
I used this sample code:
import json
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, RelationsOptions
ctx = {
    "url": "https://gateway.watsonplatform.net/natural-language-understanding/api",
    "username": "USERNAME",
    "password": "PASSWORD"
}
version = '2017-02-27'
text = "I go to an MBA college in Bhubaneswar"
#text = "mba college in bhubaneswar"
nlu = NaturalLanguageUnderstandingV1(version=version, username=ctx.get('username'),password=ctx.get('password'))
entities = EntitiesOptions()
relations = RelationsOptions()
response = nlu.analyze(text=text, features=Features(entities=entities,relations=relations),language='en')
print(json.dumps(response, indent=2))
That gives me the following results.
{
"usage": {
"text_units": 1,
"text_characters": 37,
"features": 2
},
"relations": [
{
"type": "basedIn",
"sentence": "I go to an MBA college in Bhubaneswar",
"score": 0.669215,
"arguments": [
{
"text": "college",
"location": [
15,
22
],
"entities": [
{
"type": "Organization",
"text": "college"
}
]
},
{
"text": "Bhubaneswar",
"location": [
26,
37
],
"entities": [
{
"type": "GeopoliticalEntity",
"text": "Bhubaneswar"
}
]
}
]
}
],
"language": "en",
"entities": [
{
"type": "Location",
"text": "Bhubaneswar",
"relevance": 0.33,
"disambiguation": {
"subtype": [
"IndianCity",
"City"
],
"name": "Bhubaneswar",
"dbpedia_resource": "http://dbpedia.org/resource/Bhubaneswar"
},
"count": 1
}
]
}
If it's the case that you are only going to get fragments to scan, then ReeceMed's solution will resolve it for you.
If the NLU service does not recognise the city you have entered, you can create a custom model using Watson Knowledge Studio, which can then be deployed to the NLU service, giving customised entities and relationships.
I have taken the Blog App, added a Category ContentType as a field in the BlogPost ContentType and built a query to factor Category into the results.
But I am having trouble with the In-ValueProvider. Following the example here, the Visual Query Designer seems to be ignoring the incoming value from my ModuleDataSource.
I have double checked the In-Stream name, my Entity names, case, TestParameters, etc. Are there any known bugs in 2sxc 8.44 and up that would cause this issue? What have I missed?
In this case I am using a RelationshipFilter. Relationship is "Category". Filter is "[In:Config:Category]". I can switch out to a [Querystring:Category] and that works fine and runs all my code.
Thanks for reading.
OK I found a workaround.
It turns out that the In-ValueProvider is working, but it's struggling with the Category of my BlogPost, I think because Category is an entity.
For background I have a BlogPost ContentType, a Category ContentType, and an Articles Home Header ContentType. Articles Home Header sets both the header info for the articles page and the Category entity.
For some reason the RelationshipFilter is having trouble comparing the Category entities between Articles Home Header and BlogPost. I tried the following for my Filter and neither worked:
[In:Config:Category]
[In:Config:Category:Title]
I wonder if this is a case sensitivity issue, a bug, or if I am just misunderstanding the filter syntax.
To work around I created a temp field called TempCategory in my Articles Home Header and used [In:Config:TempCategory] for the filter.
That worked.
For reference here is a snippet from the Query:
{
"Config": [
{
"Title": "Coaching Articles",
"SubTitle": "",
"Image": "/Portals/0/uploadedimages/AcademicPrograms/Christ_College/crosswise-hero.jpg",
"ImageAlt": "Crosswise stained glass",
"Category": [
{
"Id": 2716,
"Title": "Coaching"
}
],
"Id": 3118,
"Modified": "2016-06-21T10:44:21.9Z",
"_2sxcEditInformation": {
"sortOrder": 0
}
}
],
"Paging": [
{
"Title": "Paging Information",
"PageSize": 10,
"PageNumber": 1,
"ItemCount": 0,
"PageCount": 0,
"Id": 0,
"Modified": "0001-01-01T00:00:00Z",
"_2sxcEditInformation": {
"entityId": 0,
"title": "Paging Information"
}
}
],
"Default": [
{
"Title": "Protect Your Players and Your Program: An Athletic Leader's Legal Duties",
"UrlKey": "an-athletic-leaders-legal-duties",
"PublishingGroup": null,
"PublicationMoment": "2016-06-15T00:00:00Z",
"Image": "/Portals/0/uploadedimages/AcademicPrograms/Graduate/Coaching/an_athletic_leaders_legal_duty.jpg",
"ImageSquare": false,
"Teaser": "<p>When the clock started on the new year earlier this month, all but one state joined the growing legal effort to protect and prevent concussions and head injuries among America’s young.</p>",
"Body": "<p><strong>When the clock started on the new year earlier this month,</strong> all but one state joined the growing legal effort to protect and prevent concussions and head injuries among America’s young.</p>\n<p>As sports-related injuries and issues continue to dominate the headlines and influence programs throughout the country, laws like “return-to-play” are becoming a sign of the times when it comes to protecting players and athletic programs alike. The world of athletics is experiencing a significant shift in the perception of the roles and responsibilities of coaches, schools and athletic personnel.</p>",
"DesignedContent": [],
"Tags": [
{
"Id": 2576,
"Title": "coaching"
},
{
"Id": 2575,
"Title": "management"
},
{
"Id": 2574,
"Title": "sports"
},
{
"Id": 3035,
"Title": "legal"
}
],
"Author": [
{
"Id": 3030,
"Title": "Shaleek Blackburn"
}
],
"ImageAlt": "Referee holding a red",
"Thumbnail": "",
"ThumbnailAlt": "",
"RelatedArticles": [
{
"Id": 2564,
"Title": "Athletic Personnel's Duty To Warn"
},
{
"Id": 2565,
"Title": "Get A Better Grip On Bullying"
},
{
"Id": 2717,
"Title": "Good Coaching Develops Exceptional Athletes and People"
}
],
"Category": [
{
"Id": 2716,
"Title": "Coaching"
}
],
"ArticleRelationships": null,
"Id": 2513,
"Modified": "2016-06-15T19:32:17.913Z",
"_2sxcEditInformation": {
"entityId": 2513,
"title": "Protect Your Players and Your Program: An Athletic Leader's Legal Duties"
}
}
]
}
I want to find the CEO of IBM. What would be the MQL query for this?
The MQL for this search looks like the following.
This particular instance may be a tad more complicated than necessary, because I initially produced it with a Freebase interactive search and then simply added/improved the filters manually.
I verified it with various company names with relative success, i.e. it works provided that the underlying data is properly codified in Freebase (some companies are missing, for some companies the leadership data is incomplete etc.)
There are a few tricks to this query:
the company name in the u0 filter needs to match precisely the company name as recorded in Freebase. You could use a contains predicate rather than an equals one, but that could introduce many irrelevant hits. For example, you need to use "IBM", "Apple Inc." or "General Motors" rather than common alternatives to these names ("International Business Machines", "Apple", "GM"...).
the u1 filter, on the leadership role, is expressed as an extensive one-of predicate because, unfortunately, the nomenclature for these roles is relatively loose, with duplicates (e.g. a role could be recorded as "CEO" or "Chief Executive Officer"), and because the role of CEO is often coupled with other corporate roles such as Chairman [of the board] and/or President. I hand-picked this list by first looking up (in Freebase) the instances of Leadership Roles which contained "CEO" or "Chief Executive".
the u2 filter expresses that the to date should be empty, to select only the person currently in office, as opposed to former CEOs (for whom, hopefully, Freebase recorded the end date of their tenure).
Depending on your application, you may need to test that the query returns one and exactly one record, and adapt accordingly if it doesn't.
The Freebase MQL editor is a convenient tool to test and edit this kind of query.
[
{
"from": null,
"id": null,
"limit": 20,
"organization": {
"id": null,
"name": null,
"optional": true
},
"person": {
"id": null,
"name": null,
"optional": true
},
"role": {
"id": null,
"name": null,
"optional": true
},
"s0:type": [
{
"id": "/organization/leadership",
"link": [
{
"timestamp": [
{
"optional": true,
"type": "/type/datetime",
"value": null
}
],
"type": "/type/link"
}
],
"type": "/type/type"
}
],
"sort": "s0:type.link.timestamp.value",
"title": null,
"to": null,
"type": "/organization/leadership",
"u0:organization": [
{
"id": null,
"name": "IBM",
"type": "/organization/organization"
}
],
"u1:role": [
{
"id": null,
"name|=": ["Chief Executive Officer", "President and CEO", "Chairman and CEO", "Interim CEO", "Interim Chief Executive Officer", "Founder and CEO", "Chairman, President and CEO", "Managing Director and CEO", "Executive Vice President and Chief Operating Officer", "Co-Founder, Chairman and Chief Executive Officer"],
"type": "/organization/role"
}
],
"u2:to": [
{
"value": null,
"optional": "forbidden"
}
]
}
]
Sample return (for "IBM", specifically)
{
"code": "/api/status/ok",
"result": [{
"from": "2012-01-01",
"id": "/m/09t7b08",
"organization": {
"id": "/en/ibm",
"name": "IBM"
},
"person": {
"id": "/en/virginia_m_rometty",
"name": "Virginia M. Rometty"
},
"role": {
"id": "/en/chairman_president_and_ceo",
"name": "Chairman, President and CEO"
},
"s0:type": [{
"id": "/organization/leadership",
"link": [{
"timestamp": [{
"type": "/type/datetime",
"value": "2010-01-23T08:02:57.0006Z"
}],
"type": "/type/link"
}],
"type": "/type/type"
}],
"title": "Chairman, President and CEO",
"to": null,
"type": "/organization/leadership",
"u0:organization": [{
"id": "/en/ibm",
"name": "IBM",
"type": "/organization/organization"
}],
"u1:role": [{
"id": "/en/chairman_president_and_ceo",
"type": "/organization/role"
}],
"u2:to": []
}]
}
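As noted above, it's worth testing that the query returned exactly one record before trusting the answer. A minimal sketch in plain Python (the response dict is abbreviated from the sample return above; the helper name is mine):

```python
# MQL response, abbreviated from the sample return above.
response = {
    "code": "/api/status/ok",
    "result": [{
        "organization": {"id": "/en/ibm", "name": "IBM"},
        "person": {"id": "/en/virginia_m_rometty", "name": "Virginia M. Rometty"},
        "role": {"id": "/en/chairman_president_and_ceo",
                 "name": "Chairman, President and CEO"},
        "to": None,  # empty end date => currently in office
    }],
}

def current_ceo(resp):
    """Return the current CEO's name, or None if the answer is ambiguous."""
    if resp.get("code") != "/api/status/ok":
        return None
    results = resp.get("result", [])
    # Exactly one record expected; anything else needs application-level handling.
    if len(results) != 1:
        return None
    return results[0]["person"]["name"]

print(current_ceo(response))  # Virginia M. Rometty
```

Returning None for zero or multiple matches forces the caller to decide how to disambiguate, which matches the "adapt accordingly" advice above.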