Calling the face and emotion API at the same time - microsoft-cognitive

My goal is to take the live camera sample and create an app that uses the Emotion API and the Face API at the same time. Whenever it detects a face, it should report gender, age, emotion, and emotion detection confidence in a single string.
I am having trouble with this because all of the functions are async and the frame analysis (the analysis function) is done individually per frame.
Thanks for your help.

I have tried calling the APIs via the frame-analysis classes. Try checking How to Analyze Videos in Real-time.

Hello from the Cognitive Services Face API dev team. We are going to support emotion analysis during detection in the next release, which should happen before April. It invokes the same interface provided by the Emotion API. So please be a little more patient, and feel free to reach out if you have any further questions.
Best,
Xuan (Sean) Hu.
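Until that release lands, the question itself can be solved by running the two async calls concurrently and merging their results into one string. A minimal Python sketch of that pattern follows (the live camera sample is actually C#, and `detect_face`/`detect_emotion` here are stub coroutines standing in for the real Face API and Emotion API calls, which would return these fields from the service):

```python
import asyncio

async def detect_face(frame):
    # Placeholder for the real Face API detect call
    # (returnFaceAttributes=age,gender); returns canned values here.
    return {"gender": "male", "age": 31}

async def detect_emotion(frame):
    # Placeholder for the real Emotion API recognize call.
    return {"emotion": "happiness", "confidence": 0.97}

async def analyze_frame(frame):
    # Fire both async calls concurrently and wait for both results.
    face, emotion = await asyncio.gather(detect_face(frame),
                                         detect_emotion(frame))
    # Merge both responses into the single string the question asks for.
    return "Gender: {gender}, Age: {age}, Emotion: {emotion} ({confidence:.0%})".format(
        **face, **emotion)

result = asyncio.run(analyze_frame(b"raw-frame-bytes"))
print(result)
```

The same `gather`-then-format shape carries over to C#'s `Task.WhenAll` in the live camera sample.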

Related

Batch Transcription and LUIS Integration

I have a requirement to integrate batch transcription with LUIS: I will pass the transcriptions to LUIS as-is and get the intent of the audio.
As far as I know, we can pass the data to LUIS for intent analysis as a query, which accepts only 500 characters.
So here is the question: is it possible to pass the full transcription from the Speech-to-Text batch transcription API to LUIS for intent analysis, or do we have to feed the data to LUIS in chunks?
If we feed the data in chunks (500 characters each), how will we get the overall intent of the audio, since different utterances may lead to different top-level intents?
I have done a lot of research on this and read the Microsoft documentation, but could not find an answer.
Please suggest the best possible way to achieve this scenario.
In my opinion, I don't think we can get the intent of the audio accurately if we feed the data in chunks. I think we had better limit the length to no more than 500 characters; if the input is longer, just return an error message (or disallow input longer than 500 characters).
By the way, is it possible to get rid of unimportant words before sending the text to LUIS?
Here is the LUIS integration with the Speech service: https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/interactive-voice-response-bot#ai-and-nlp-azure-services
We do have a Telephony channel which is currently in private preview, and as such, comes with preview terms (no SLA, etc).
Here are the details about the preview: https://github.com/microsoft/BotFramework-IVR.
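If chunking is unavoidable, the mechanical part, splitting a long transcription into queries under the 500-character limit and aggregating the per-chunk intents, can be sketched as below. This is a Python sketch under the thread's assumptions; `chunk_transcription` and `overall_intent` are illustrative helpers (not LUIS SDK functions), and the naive majority vote is exactly the weakness the question raises:

```python
from collections import Counter

MAX_QUERY_LEN = 500  # LUIS query length limit discussed above

def chunk_transcription(text, limit=MAX_QUERY_LEN):
    """Split a long transcription into chunks of at most `limit`
    characters, breaking on sentence boundaries where possible."""
    chunks, current = [], ""
    for sentence in text.replace("?", ".").replace("!", ".").split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        candidate = (current + " " + sentence).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = sentence[:limit]  # hard-truncate oversized sentences
    if current:
        chunks.append(current)
    return chunks

def overall_intent(per_chunk_intents):
    """Naive aggregation: the most frequent top intent across chunks
    wins. A real app might weight by the scores LUIS returns instead."""
    return Counter(per_chunk_intents).most_common(1)[0][0]
```

Each chunk would then be sent to the LUIS prediction endpoint (not shown), and the per-chunk top intents fed to `overall_intent`.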

Microsoft Face API - faceId value for the same person is different with each API call

I'm using the Microsoft Face API to track people in front of a webcam by sending a screenshot from the camera to the API every second or so.
If a particular person is in front of the camera for multiple API calls, the API should return the same faceId for that person in each response, but it is returning a new faceId for that person instead. This makes it impossible for me to know whether there is a new person in front of the camera or the same person as before.
This was not the case a couple of weeks ago; it's just something that has started happening recently.
The parameters that I'm sending are...
returnFaceId:true,returnFaceLandmarks:false,returnFaceAttributes:age,gender
... the gender and age detection are working fine; it's just the faceId that I'm having problems with.
Is there a limit to how many faceIds it'll assign per month or something? I can't find any reference to a limit in the documentation.
Hello, this is Shuolin from the Microsoft Cognitive Services Face Team. As described in the Face Detect API reference (https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), each Detect API call returns a unique faceId, even when the same face is used in different calls. For your situation, I suggest using the Identify API to recognize the person (https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
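Following that answer, a Python sketch of the Identify call and of reducing its response to stable personIds. The endpoint region, key placeholder, and helper names are assumptions for illustration, not part of an official SDK; only the pure response-handling helper is exercised here:

```python
import json
import urllib.request

ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0"  # region assumed
KEY = "<your-subscription-key>"

def identify(face_ids, person_group_id):
    """POST /identify: ask the service which persons the transient
    faceIds (from Detect) most likely belong to."""
    req = urllib.request.Request(
        ENDPOINT + "/identify",
        data=json.dumps({"faceIds": face_ids,
                         "personGroupId": person_group_id}).encode(),
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def stable_ids(identify_result, min_confidence=0.5):
    """Reduce an Identify response to {faceId: personId or None}.
    The personId is stable across frames; the faceId is not."""
    out = {}
    for entry in identify_result:
        best = max((c for c in entry["candidates"]
                    if c["confidence"] >= min_confidence),
                   key=lambda c: c["confidence"], default=None)
        out[entry["faceId"]] = best["personId"] if best else None
    return out
```

Tracking by personId rather than faceId makes the "same person or new person?" question answerable across repeated Detect calls.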

Receiving Invalid personIDs in Identify response with MSFT face API

Was wondering if anyone has come across this problem:
I'm using the MSFT Cognitive Services Face API to persist person groups, persons, and person faces.
I am sending an image to the Face Identify API and receiving a candidate list that includes "phantom" personIds that were not persisted by me and are not listed in the person group used for identification.
When I run the List Persons in a Person Group API, I don't receive those personIds.
Overall everything is working but for some images I get these invalid responses.
Any clue would be appreciated.
Hello from the Microsoft Cognitive Services Face Team,
We are really sorry for the inconvenience caused by our current training strategy. As far as I know, training a person group is still computationally expensive and time-consuming, which is why we make it an asynchronous call and require the group to be trained again after it is modified.
If you have any further problems, please feel free to post an update.
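Since the answer points at the asynchronous training strategy, a practical workaround is to wait for training to finish after modifying the group, and to defensively drop candidates that are not in your persisted person list. A Python sketch under those assumptions (`get_status` stands in for GET /persongroups/{id}/training; both helper names are made up for illustration):

```python
import time

def wait_for_training(get_status, poll_seconds=1.0, timeout=60.0):
    """Poll person-group training status until it succeeds.
    `get_status` is a callable returning e.g. "running",
    "succeeded", or "failed"."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == "succeeded":
            return True
        if status == "failed":
            raise RuntimeError("person group training failed")
        time.sleep(poll_seconds)
    raise TimeoutError("training did not finish in time")

def drop_phantom_candidates(candidates, known_person_ids):
    """Keep only Identify candidates whose personId is actually
    listed in the person group (guards against stale results)."""
    return [c for c in candidates if c["personId"] in known_person_ids]
```

The known-personId set can be refreshed from the List Persons in a Person Group API before each Identify pass.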

Export data from google analytics attribution model

Is there a way to export the data from the Google Analytics Attribution Model Comparison Tool? I have searched through all the dimensions and measures, but I was unable to find the correct measure.
Is this data available through Core API?
Is there a combination of measures to calculate the different models?
You can use Google's MCF (Multiple Channel Funnel) API:
https://developers.google.com/analytics/devguides/reporting/mcf/v3/
The models you can use seem to be limited (only first and last interaction), but at least you can export your funnel paths and build your own attribution much more easily.
Hey, I asked Google for an answer: they haven't developed this yet, but there is an open feature request for it, and it could take a few months. I will keep track of this. To get the exact same dimensions and metrics, BigQuery can do the job.
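To make the MCF suggestion concrete, a small Python sketch that builds an MCF v3 report query URL. The metric and dimension names shown are examples only (check the MCF reference for the full list), and the helper itself is illustrative, not part of a Google client library:

```python
from urllib.parse import urlencode

MCF_ENDPOINT = "https://www.googleapis.com/analytics/v3/data/mcf"

def build_mcf_query(view_id, start, end,
                    metrics="mcf:totalConversions",
                    dimensions="mcf:sourcePath"):
    """Assemble a Multi-Channel Funnels v3 query URL; the request
    itself would also need an OAuth access token (not shown)."""
    params = {"ids": "ga:" + view_id,
              "start-date": start,
              "end-date": end,
              "metrics": metrics,
              "dimensions": dimensions}
    return MCF_ENDPOINT + "?" + urlencode(params)

url = build_mcf_query("12345", "2023-01-01", "2023-01-31")
```

The path-valued dimensions (e.g. `mcf:sourcePath`) are what let you rebuild attribution models client-side from the exported funnel paths.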

how to use word cloud for twitter application

I am working on a Twitter application where I want to show all trends in a word cloud,
but I don't know how to build a word cloud or which Twitter API to use for this.
I want something like this example.
Please help me with this.
Basically, to generate a tag cloud you need a list of words and their frequencies. The Twitter API does not provide the frequency of a term (word) in its results, so you will have to get the current trends from the Twitter API, store them locally, and calculate the frequency of each term.
A simple and easy way to do this would be to query the trends/daily API, which returns trending topics (by hour) for a given day. Accumulate the terms from the API results and count their frequencies. Drawing the cloud from this list of words and frequencies should then be an easy task.
I hope this answers the question. If you want something more sophisticated, you will have to monitor and process tweets using the search and streaming APIs.
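The accumulate-and-count step described above can be sketched in a few lines of Python. The helper names and the linear frequency-to-font-size mapping are illustrative choices, not part of any Twitter library:

```python
from collections import Counter

def term_frequencies(trend_snapshots):
    """Count how often each trend term appears across snapshots.
    `trend_snapshots` is a list of term lists, e.g. one list per
    hour pulled from the trends API and stored locally."""
    counts = Counter()
    for snapshot in trend_snapshots:
        counts.update(term.lower() for term in snapshot)
    return counts

def cloud_weights(counts, max_size=48, min_size=12):
    """Map frequencies to font sizes for a simple tag cloud:
    the most frequent term gets max_size, scaling down linearly."""
    if not counts:
        return {}
    top = max(counts.values())
    return {term: min_size + (max_size - min_size) * n // top
            for term, n in counts.items()}
```

Feeding `cloud_weights` into any rendering layer (HTML spans, a canvas, a word-cloud library) completes the pipeline.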
Using the Twitter API you can get the current trends, e.g.: http://apiwiki.twitter.com/Twitter-Search-API-Method%3A-trends-current
You'll probably want to have a look at the documentation at http://apiwiki.twitter.com/Twitter-API-Documentation
