I am using Geocoder Autocomplete to display matching locations while the user types. Afterwards I use the Geocoder with the returned location ID to fetch detailed information about the selected location.
It worked well until I tried to select "Russia".
Here is my first request to Geocoder Autocomplete via https://autocomplete.geocoder.api.here.com/6.2/suggest.json:
{
  "app_id": "xxx",
  "app_code": "xxx",
  "query": "russia",
  "resultType": "areas"
}
And here is the (simplified) response:
{
  "suggestions": [
    {
      "label": "Russia",
      "language": "en",
      "countryCode": "RUS",
      "locationId": "NT_Ya5FK7rlnK5m6PEDf7BwfA",
      "address": {
        "country": "Russia"
      },
      "matchLevel": "country"
    },
    ...
  ]
}
The second request goes to the Geocoder via https://geocoder.api.here.com/6.2/geocode.json with the following arguments:
{
  "app_id": "xxx",
  "app_code": "xxx",
  "locationId": "NT_Ya5FK7rlnK5m6PEDf7BwfA",
  "jsonattributes": "1",
  "gen": "9",
  "language": "en"
}
As you can see, the location ID is the same as in the response to the first query. I expected to get details for the country Russia, but instead I receive an empty response:
{
  "response": {
    "metaInfo": {
      "timestamp": "2019-08-20T21:02:54.652+0000"
    },
    "view": []
  }
}
After some troubleshooting I noticed that the Geocoder also works with a simple form input, so I tried this request directly on the example page. In searchtext I typed "russia", and voilà, I got a response (simplified):
{
  "Response": {
    "MetaInfo": {
      "Timestamp": "2019-08-21T12:36:07.874+0000"
    },
    "View": [
      {
        "_type": "SearchResultsViewType",
        "ViewId": 0,
        "Result": [
          {
            ...
            "Location": {
              "LocationId": "NT_tcqMSofTaW297lvniHjdXD",
              "LocationType": "area",
              "Address": {
                "Label": "Россия",
                "Country": "RUS",
                "AdditionalData": [
                  {
                    "value": "Россия",
                    "key": "CountryName"
                  }
                ]
              },
              ...
            }
          }
        ]
      }
    ]
  }
}
But wait, what? The ID from Autocomplete was NT_Ya5FK7rlnK5m6PEDf7BwfA, while the one from the Geocoder is NT_tcqMSofTaW297lvniHjdXD.
Why do I receive the wrong location ID from Geocoder Autocomplete?
We just implemented the HERE API in our product and are currently testing it with real use-case input, which is how we found this bug.
Is this just one location with an inconsistent locationId reference, or are there more? How can we work around this error? Is it common?
The Geocoder generates a LocationId from a set of values that uniquely identify the object. This set includes different types of data such as the result type, base names and attribution of the admin hierarchy, street name, house number, etc. From all this information the Geocoder generates a hash value which is expected to be unique.
Using only base names guarantees that the LocationId does not change if, for example, additional language variants are added to a country or state name. But if the main official country or state name changes, all the areas and addresses within this country or state will get a new LocationId. So a LocationId from the Geocoder Autocomplete API will not always work with the Geocoder API.
We will update our documentation to reflect this, as the current documentation may be a bit misleading.
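A practical workaround is to avoid forwarding the Autocomplete locationId at all and instead geocode the suggestion's label via the searchtext parameter, which is not affected by LocationId regeneration. A minimal sketch (build_geocode_request is a hypothetical helper; the app_id/app_code values are placeholders):

```python
from urllib.parse import urlencode

GEOCODER_URL = "https://geocoder.api.here.com/6.2/geocode.json"

def build_geocode_request(suggestion, app_id, app_code):
    """Build a Geocoder request URL from an Autocomplete suggestion dict."""
    params = {
        "app_id": app_id,
        "app_code": app_code,
        # Use the human-readable label instead of the locationId hash.
        "searchtext": suggestion["label"],
        "jsonattributes": "1",
        "gen": "9",
        "language": suggestion.get("language", "en"),
    }
    return GEOCODER_URL + "?" + urlencode(params)

url = build_geocode_request({"label": "Russia", "language": "en"}, "xxx", "xxx")
```

This trades the exactness of an ID lookup for stability; for country-level matches like "Russia" the label is unambiguous enough in practice.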
As a CosmosDB (SQL API) user I would like to index all non object or array properties inside of an object.
By default the index policy in Cosmos (/*) will index every property. Our data set is getting extremely large (expensive) and this strategy is no longer optimal. We store our metadata at the root, with customer data wrapped inside an object property data.
Our platform restricts queries on the data path to value-type properties, which means that indexing the objects and arrays nested under the data path just slows down writes and costs RUs to store, but is never used.
I have tried several iterations of index policies but cannot find one that fits. Example:
{
  "partitionKey": "f402a704-19bb-4f4d-93e6-801c50280cf6",
  "id": "4a7a11e5-00b5-4def-8e80-132a8c083f24",
  "data": {
    "country": "Belgium",
    "employee": 250,
    "teammates": [
      { "name": "Jake", "id": 123 ... },
      { "name": "kyle", "id": 3252352 ... }
    ],
    "user": {
      "name": "Brian",
      "addresses": [{ "city": "Moscow" ... }, { "city": "Moscow" ... }]
    }
  }
}
In this case I want to only index the root properties as well as /data/employee and /data/country.
Policies like /data/* will not work because it would then index /data/teammates/name ... and so on.
/data/? => assumes data is a value type, which it never will be, so this doesn't work.
/data/, /data/*/? and /data/*? are not accepted by Cosmos as valid policies.
Additionally, I can't simply exclude /data/teammates/ and /data/user/, because the contents of data are completely dynamic; while that might cover this use case, there are several hundred thousand others it would not.
I have tried many iterations, but the options fail for various reasons. Is there a way to support what I am trying to do?
This indexing policy will index the properties you are asking for.
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/partitionKey/?"
    },
    {
      "path": "/data/country/?"
    },
    {
      "path": "/data/employee/?"
    }
  ],
  "excludedPaths": [
    {
      "path": "/*"
    }
  ]
}
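To see why this works, here is an illustrative sketch (not the actual Cosmos DB matching engine, and deliberately simplified) of how the /? and /* suffixes select document paths, with an explicit include winning over the catch-all exclude:

```python
def is_indexed(prop_path, included, excluded):
    """Simplified path matching: prop_path like '/data/country',
    patterns like '/data/country/?' (exact scalar) or '/*' (subtree)."""
    def matches(pattern):
        if pattern.endswith("/?"):   # exact scalar property
            return prop_path == pattern[:-2]
        if pattern.endswith("/*"):   # the property and everything beneath it
            base = pattern[:-2]
            return base == "" or prop_path == base or prop_path.startswith(base + "/")
        return prop_path == pattern
    # In this sketch an explicit include takes precedence over excludes.
    if any(matches(p) for p in included):
        return True
    return not any(matches(p) for p in excluded)

included = ["/partitionKey/?", "/data/country/?", "/data/employee/?"]
excluded = ["/*"]
```

With these lists, /data/country and /data/employee stay indexed while everything else under data, including the dynamic teammates and user subtrees, falls through to the /* exclusion.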
What exactly does the PROXIMITY parameter describe, and why do ORIGIN and TO always have the same PROXIMITY?
Please see the JSON snippet below, taken from the HERE Incident API, for an example.
EDIT:
My question is not specific to the JSON example below; it is a more general question about the meaning of the PROXIMITY parameter.
For instance, "midway between" is pretty self explanatory. What does it mean for a traffic incident to be "at" two points or "past" two points?
In addition, for all the data I have looked at ORIGIN:PROXIMITY:DESCRIPTION is always the same as TO:PROXIMITY:DESCRIPTION. Why?
{
  "INTERSECTION": {
    "ORIGIN": {
      "ID": "",
      "STREET1": {
        "ADDRESS1": "Pletschenau"
      },
      "STREET2": {
        "ADDRESS1": "Schillerweg"
      },
      "COUNTY": "Calw",
      "STATE": "",
      "PROXIMITY": {
        "ID": "MID",
        "DESCRIPTION": "midway between"
      }
    },
    "TO": {
      "ID": "",
      "STREET1": {
        "ADDRESS1": "Pletschenau"
      },
      "STREET2": {
        "ADDRESS1": "Birkenweg"
      },
      "COUNTY": "Calw",
      "STATE": "",
      "PROXIMITY": {
        "ID": "MID",
        "DESCRIPTION": "midway between"
      }
    }
  },
  "GEOLOC": {
    "ORIGIN": {
      "LATITUDE": 48.73873,
      "LONGITUDE": 8.73767
    },
    "TO": [{
      "LATITUDE": 48.74108,
      "LONGITUDE": 8.73581
    }]
  }
}
We expect that your use case matches the following example: https://developer.here.com/documentation/examples/rest/traffic/traffic-incidents-via-proximity
This example retrieves traffic incident information for a specific area of Berlin, defined by a radius of 15 km around a specific point (the prox parameter).
The start (ORIGIN) and the destination (TO) both represent the same waypoint here, which explains why the two are not different. If your API call is different, please share the REST API call.
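For reference, a request of that shape can be assembled as follows (a sketch only: incidents_url is a hypothetical helper, the endpoint and the prox format follow the linked example page, and the credentials are placeholders):

```python
from urllib.parse import urlencode

def incidents_url(lat, lon, radius_m, app_id="xxx", app_code="xxx"):
    """Build a traffic-incidents-in-proximity request URL."""
    params = {
        "app_id": app_id,
        "app_code": app_code,
        # prox = latitude,longitude,radius-in-meters around the point
        "prox": f"{lat},{lon},{radius_m}",
    }
    return ("https://traffic.api.here.com/traffic/6.0/incidents.json?"
            + urlencode(params))

# A point in Berlin with a 15 km radius, as in the example.
url = incidents_url(52.5311, 13.3644, 15000)
```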
I am trying to let any user see only the name and email properties.
"users": {
  "kanyesUID": {
    "name": "Kanye West",
    "email": "kwest@gmail.com",
    "sensitive_info": "a"
  },
  "taylorsUID": {
    "name": "Taylor Swift",
    "email": "tswift@gmail.com",
    "sensitive_info": "a"
  },
  "seacrestsUID": {
    "name": "Ryan Seacrest",
    "email": "rseacrest@gmail.com",
    "isAdmin": true
  }
}
For the following code, I should get this output:
"kanyesUID": {
  "name": "Kanye West",
  "email": "kwest@gmail.com"
},
"taylorsUID": {
  "name": "Taylor Swift",
  "email": "tswift@gmail.com"
},
"seacrestsUID": {
  "name": "Ryan Seacrest",
  "email": "rseacrest@gmail.com"
}
firebase.database().ref('/users/').once('value').then(function(snapshot) {
  print_snapshot_json(snapshot);
});
What would the rules need to be to allow this to happen?
You can't do this with security rules. You're attempting to use security rules as a filter to determine which fields will appear in search results, but that's not supported. A query must be able to read all the data that would be returned by the query. What you can do instead is split the public and private fields of each user into separate top-level nodes, so they can be queried for and protected separately.
For more details, please read the documentation, specifically the section titled "rules are not filters".
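As a sketch of that split, the rules below assume hypothetical top-level nodes user_public and user_private; the node names and the exact conditions are illustrative, not from the original question:

```json
{
  "rules": {
    "user_public": {
      "$uid": {
        ".read": true,
        ".write": "auth != null && auth.uid === $uid"
      }
    },
    "user_private": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

name and email would then live under user_public/$uid, while fields like sensitive_info move to user_private/$uid, so the public query never touches data it is not allowed to read.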
I've gathered JSON via the LinkedIn API.
The node
[GetCompanyPageStatistics][followStatistics][regions]
stores information about follower locations, for example:
"regions": {
  "_total": 9,
  "values": [
    {
      "entryKey": "pl-8172",
      "entryValue": "18"
    },
    {
      "entryKey": "pl-8355",
      "entryValue": "2"
    },
    {
      "entryKey": "pl-8218",
      "entryValue": "1"
    },
    {
      "entryKey": "de-4944",
      "entryValue": "1"
    }
  ]
}
The given values consist of a country code and a region code. Is there a way to decode the region? The LinkedIn API documentation isn't very helpful:
https://developer.linkedin.com/docs/reference/geography-codes
(there's only eu.pl for Poland, without any information about region-specific codes)
It looks like there's a v2 API to get that data:
https://learn.microsoft.com/en-us/linkedin/shared/references/v2/standardized-data/locations/regions
E.g., hitting https://api.linkedin.com/v2/regions/8172?oauth2_access_token=XXX gets you:
{
  "country": "urn:li:country:pl",
  "name": {
    "locale": {
      "country": "US",
      "language": "en"
    },
    "value": "Warsaw, Masovian District, Poland"
  },
  "id": 8172,
  "$URN": "urn:li:region:8172",
  "states": []
}
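A small sketch under the assumption that each entryKey has the form "<countryCode>-<regionId>", as in the sample above; parse_entry_key and region_lookup_url are hypothetical helpers, and the endpoint comes from the linked v2 docs:

```python
def parse_entry_key(entry_key):
    """Split an entryKey like 'pl-8172' into ('pl', 8172)."""
    country, region_id = entry_key.split("-", 1)
    return country, int(region_id)

def region_lookup_url(entry_key, token="XXX"):
    """Build the v2 regions lookup URL for an entryKey."""
    _, region_id = parse_entry_key(entry_key)
    return (f"https://api.linkedin.com/v2/regions/{region_id}"
            f"?oauth2_access_token={token}")

url = region_lookup_url("pl-8172")
```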
When I send text to the Watson NLU API containing my city, which is located in India, I get an empty entities list. It should come back with a Location entity. How can I solve this problem in Watson NLU?
The sentence being sent is:
mba college in bhubaneswar
where Bhubaneswar is the city
So based on your comment sentence of:
"mba college in bhubaneswar"
Putting that into NLU, entity detection fails with:
Error: unsupported text language: unknown, Code: 400
The first issue is that because no language is specified, it tries to guess the language. But there is not enough there to guess (even if it is obvious to you).
The second issue is that even if you specify the language, it will not be fully recognised, because it's not a real sentence but a fragment.
NLU doesn't just do a keyword lookup; it tries to understand the parts of speech (POS) and, from that, determine what each word means.
So if I give it a real sentence it will work. For example:
I go to an MBA college in Bhubaneswar
I used this sample code:
import json
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, RelationsOptions

ctx = {
    "url": "https://gateway.watsonplatform.net/natural-language-understanding/api",
    "username": "USERNAME",
    "password": "PASSWORD"
}

version = '2017-02-27'

text = "I go to an MBA college in Bhubaneswar"
# text = "mba college in bhubaneswar"

nlu = NaturalLanguageUnderstandingV1(version=version,
                                     username=ctx.get('username'),
                                     password=ctx.get('password'))

entities = EntitiesOptions()
relations = RelationsOptions()

response = nlu.analyze(text=text,
                       features=Features(entities=entities, relations=relations),
                       language='en')

print(json.dumps(response, indent=2))
That gives me the following results.
{
  "usage": {
    "text_units": 1,
    "text_characters": 37,
    "features": 2
  },
  "relations": [
    {
      "type": "basedIn",
      "sentence": "I go to an MBA college in Bhubaneswar",
      "score": 0.669215,
      "arguments": [
        {
          "text": "college",
          "location": [15, 22],
          "entities": [
            {
              "type": "Organization",
              "text": "college"
            }
          ]
        },
        {
          "text": "Bhubaneswar",
          "location": [26, 37],
          "entities": [
            {
              "type": "GeopoliticalEntity",
              "text": "Bhubaneswar"
            }
          ]
        }
      ]
    }
  ],
  "language": "en",
  "entities": [
    {
      "type": "Location",
      "text": "Bhubaneswar",
      "relevance": 0.33,
      "disambiguation": {
        "subtype": ["IndianCity", "City"],
        "name": "Bhubaneswar",
        "dbpedia_resource": "http://dbpedia.org/resource/Bhubaneswar"
      },
      "count": 1
    }
  ]
}
If you are only ever going to get fragments to scan, then @ReeceMed's solution will resolve it for you.
If the NLU service does not recognise the city you have entered, you can create a custom model using Watson Knowledge Studio and deploy it to the NLU service, giving you customised entities and relationships.