Users want to have a maximum of three relevant keywords to tag the articles - azure-cognitive-services

We are currently working on a project to automate part of the process for our article preparation team.
We are using Azure Cognitive Services' Text Analytics to extract key phrases from the articles. However, it extracted nearly 10 keywords.
Is there any weight that can be provided to show the importance/quality/relevance of each key phrase? Or is there any better approach to pick the top N key phrases?

The key phrase extraction tool returns the phrases in order of importance. If you select the top 3, you should get the three most important phrases for the document.
What is text summarization in Azure Cognitive Service for Language (preview)? - Azure Cognitive Services | Microsoft Docs
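If it helps, here is a minimal sketch of taking the top three phrases with the Python Text Analytics SDK (azure-ai-textanalytics); the endpoint, key and document text below are placeholders, and it assumes the returned list is already ordered by importance as described above.

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for your Language / Text Analytics resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["<full article text goes here>"]
result = client.extract_key_phrases(documents)[0]

if not result.is_error:
    # Keep only the first three phrases, i.e. the three most important ones.
    top_three = result.key_phrases[:3]
    print(top_three)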

Related

How to build a Face/Image Classifier in Azure ML like Google Photos

I need to build an image classification model in Azure ML. It initially takes input from a phone (a check-in app that captures information such as an ID along with an image of the person; the ID is used to tag the image), which is redirected to data storage. Once that is done, we upload any number of images of people to the data storage, and the model should classify each image based on facial recognition and sort the images into separate folders per person (just like Google Photos). In short, if 100 unique people come for check-in and we take random photos of those 100 people during the event, then when we load this data to blob storage it should categorize the people separately.
Can I go with this approach?
1. Check-in app loads the image with a tag
2. Blob stores the image
3. Custom Vision ML classifier
4. Loading any number of images to blob
5. Comparing each image with the image loaded by the check-in app and categorizing into albums, just like Google Photos
6. Loading albums to the app so attendees can see the images
Please guide me on the solution and the services that need to be considered to make this possible in Azure.
Thanks in advance
Within Azure you need to look into Cognitive Services, with more information located here: https://azure.microsoft.com/en-us/services/cognitive-services/
Azure Cognitive Services is substantially surfaced as a series of API endpoints. In your example, you can post images from the mobile device to the Azure endpoint, where you can train the service to recognize individuals and have it return a JSON package of the people in the picture, place rectangles around those people in the picture, etc. Other Cognitive Services include those related to images, speech, video, etc.
The Face API maps to your scenario well: https://azure.microsoft.com/en-us/services/cognitive-services/face/
https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/#overview
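As a rough sketch of how the Face API calls could fit your flow (detect faces in an event photo, then identify them against a person group trained from the check-in images): the resource name, key, person group id and blob URL below are placeholder assumptions, not part of your setup.

import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

# 1. Detect faces in an event photo stored in blob storage; the response is a JSON
#    list with a faceId and faceRectangle for each detected face.
detect = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    headers=HEADERS,
    json={"url": "https://<storage-account>.blob.core.windows.net/events/photo001.jpg"},
)
face_ids = [face["faceId"] for face in detect.json()]

# 2. Identify the detected faces against a PersonGroup trained from the check-in images.
identify = requests.post(
    f"{ENDPOINT}/face/v1.0/identify",
    headers=HEADERS,
    json={"personGroupId": "checkin-attendees", "faceIds": face_ids},
)
for match in identify.json():
    # Each candidate carries a personId (the tagged check-in identity) and a confidence,
    # which you can use to file the photo into that person's album.
    print(match["faceId"], match["candidates"])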

Can I get reviews or stars for hotels or restaurants in HERE Geocoding & Search API?

I can't seem to find anything in the documentation (link), but the previous version of the Places API (link) said:
Certain types of categories, such as restaurants or hotels, are ranked via a "recommendations-style" algorithm where measures of popularity or quality, such as number of stars or reviews, are taken into account.
Does that mean that I would be able to retrieve info on the number of stars for a hotel or ratings for restaurants?
Rating is rich content. Rich content is not covered within the base or extended content and is generally provided by third-party data suppliers.
See more on the Discover endpoint of the Geocoding and Search API v7 here: https://developer.here.com/documentation/geocoding-search-api/dev_guide/topics/endpoint-discover-brief.html
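For reference, a Discover request looks roughly like the sketch below (placeholder API key, example coordinates); the returned items carry titles, addresses and categories, but not the star ratings you are after, since those are rich content.

import requests

resp = requests.get(
    "https://discover.search.hereapi.com/v1/discover",
    params={
        "at": "52.5200,13.4050",   # search around Berlin (example coordinates)
        "q": "restaurants",
        "limit": 5,
        "apiKey": "<your-here-api-key>",
    },
)
for item in resp.json()["items"]:
    # title, address and categories are present; ratings/reviews are not part of this payload.
    print(item["title"], item.get("categories"))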

Google cloud usage API

My company is interested in the Translation API.
I need to know if there is a way to retrieve the usage state of an account for a specific period.
(For example, I would like to be able to know that last month I translated X characters, which corresponds to Y USD.)
I'm sure that such an API exists, but I really can't find any reference to it.
The Translation API bills per translated character and counts the translated characters for the whole project in order to create the customer's bill.
To view your current billing status, including usage and your current bill, see the Billing page (GCP Console => Billing). Billing reports for a particular service contain the fields Product name (Translate), Usage (number of characters) and Cost.
See Cloud Billing > Docs > View your billing reports and cost trends.
More detailed information about consumption of the Translation API is not provided. At this time Translation API does not have such functionality.
A similar problem was raised on Issue Tracker in 2017 but to no avail: https://issuetracker.google.com/35903950.
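If you need the numbers programmatically rather than in the console, one possible workaround (not mentioned above, and only available if you have enabled the Cloud Billing export to BigQuery) is to query the export table for the Translation service; the project, dataset, table name and date below are placeholders you would adjust.

from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT
      usage.unit        AS unit,
      SUM(usage.amount) AS characters,
      SUM(cost)         AS cost_usd
    FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
    WHERE service.description LIKE '%Translation%'
      AND usage_start_time >= TIMESTAMP('2022-01-01')  -- adjust to the period you need
    GROUP BY unit
"""
for row in client.query(query).result():
    # Characters translated and the corresponding cost for the selected period.
    print(row.unit, row.characters, row.cost_usd)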

Localized ML Kit Image labels

Is it possible to obtain labels from ML Kit Image Labeling in a given language?
I easily manage to get them in English...
but I need different languages... any suggestions?
In the docs I found this:
In addition to the text description of each label that ML Kit returns, it also returns the label's Google Knowledge Graph entity ID. This ID is a string that uniquely identifies the entity represented by the label, and is the same ID used by the Knowledge Graph Search API. You can use this string to identify an entity across languages, and independently of the formatting of the text description.
Maybe it is possible to use a graph entity id to translate the label?
Or what else can I do?
As Firebase support told me via mail on Feb 1, 2019:
Unfortunately at the moment it is not possible to use other languages for image labeling, however I have created a feature request for our engineering team to take a look at and consider for future releases. There's no telling on when this will be ready, but you can keep an eye on the Firebase Release Notes to be informed of the latest from Firebase.
On the other hand the Knowledge Graph entity ID can be used to find entities in the Google Knowledge Graph but at the moment it is not possible to connect these results with the image labeling in order to translate the label.
I first tried to play with the Knowledge Graph entity ID in order to translate the label description... but since I used the on-device Firebase library, I obtained IDs that the Knowledge Graph wasn't able to recognize (for instance: Label: Flower, Confidence: 0.97793585, EntityID: /m/0c9ph5).
I ended up using a free translation API service (Yandex), which is free for the first million translated characters a day.
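For completeness, the Knowledge Graph lookup attempt looks roughly like this (placeholder API key and entity ID); it only works for IDs the Knowledge Graph actually recognizes, which, as noted, was not the case for the on-device labels I got.

import requests

resp = requests.get(
    "https://kgsearch.googleapis.com/v1/entities:search",
    params={
        "ids": "<knowledge-graph-entity-id>",  # e.g. an ID returned by the cloud-based labeler
        "languages": "it",                     # request the localized (here: Italian) name
        "limit": 1,
        "key": "<your-api-key>",
    },
)
for element in resp.json().get("itemListElement", []):
    # The localized entity name, which could stand in for a translated label.
    print(element["result"]["name"])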

Cognitive Services - How can I access Bing search banner information (image and birth date)

Bing returns banner information including an image and birth date at the top of the search results when you enter a search such as 'Lady Gaga birth date'.
I would like to access this information using Microsoft Cognitive Services for an app using celebrity ages.
When I examine the HttpResponseMessage returned from the Cognitive Services call, I can't find the image or birth date that appear at the top of the search results page anywhere in the body of the response.
Can you point me in the right direction to get this information from a Cognitive Services call? Similarly, I'd like to be able to access the summary information that appears at the top right of the search results. Any links to advanced documentation or samples on using the Cognitive Services Bing Web Search API would also be appreciated.
Thank you for your help.
Those blocks are all custom. It's less about cognitive services and more about tapping into data, translating and creating a presentation.
If you go to the Bing Web Search API site and select "seattle seahawks" with the Response Filter set to Images, you can see that there is some text (name) and a contentUrl. You could write a parser for this to process it into your pages.
Although the more logical choice would be to call the Bing website or Wikipedia directly and parse that result, since, as you already know, it contains the information you want.
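As a sketch of the call described above (placeholder key; the endpoint shown is the current v7 one), filtering the response to images and reading back the name and contentUrl:

import requests

resp = requests.get(
    "https://api.bing.microsoft.com/v7.0/search",
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    params={"q": "seattle seahawks", "responseFilter": "Images", "count": 5},
)
for image in resp.json().get("images", {}).get("value", []):
    # Each image answer entry has a name and a contentUrl you could parse into your page.
    print(image["name"], image["contentUrl"])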
