I have a requirement to integrate batch transcription with LUIS: I will pass the transcriptions as-is to LUIS and get the intent of the audio.
As far as I know, we can pass the data to LUIS for intent analysis as a query, which accepts only 500 characters.
So here comes the question: is it possible to pass the full transcription from the Speech to Text batch transcription API to LUIS for intent analysis, or do we have to feed the data to LUIS in chunks?
If we feed the data in chunks (500 characters each), how will we get the overall intent of the audio, since different utterances may lead to different top-level intents?
I have done a lot of research on this, reading the Microsoft documentation, but could not find any answer.
Please suggest the best possible way to achieve this scenario.
In my opinion, I don't think we can get the intent of the audio accurately if we feed the data in chunks. I think we'd better limit the length of the input to no more than 500 characters; if it is longer than 500, just return an error message (or don't allow it to be longer than 500).
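That said, if you do want to experiment with chunking despite the accuracy caveat, a minimal sketch might look like the following. This assumes the LUIS v3 prediction REST endpoint; the resource endpoint, app ID, key, and the naive majority vote at the end are all placeholders rather than a recommended design:

// Sketch: split a long transcription into <= 500-character chunks,
// send each chunk to LUIS, and tally the top intents.
// LUIS_ENDPOINT, APP_ID and SUBSCRIPTION_KEY are placeholders.
const LUIS_ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com";
const APP_ID = "YOUR-APP-ID";
const SUBSCRIPTION_KEY = "YOUR-KEY";

function chunkText(text, maxLen) {
    // Naive fixed-width split; a real implementation should prefer
    // sentence boundaries so utterances are not cut in half.
    const chunks = [];
    for (let i = 0; i < text.length; i += maxLen) {
        chunks.push(text.slice(i, i + maxLen));
    }
    return chunks;
}

async function topIntentForChunk(chunk) {
    const url = LUIS_ENDPOINT + "/luis/prediction/v3.0/apps/" + APP_ID +
        "/slots/production/predict?subscription-key=" + SUBSCRIPTION_KEY +
        "&query=" + encodeURIComponent(chunk);
    const res = await fetch(url);
    const body = await res.json();
    return body.prediction.topIntent;
}

async function overallIntent(transcription) {
    const votes = {};
    for (const chunk of chunkText(transcription, 500)) {
        const intent = await topIntentForChunk(chunk);
        votes[intent] = (votes[intent] || 0) + 1;
    }
    // Majority vote across chunks -- as noted above, different utterances
    // may genuinely carry different intents, so treat this as a heuristic.
    return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
}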
By the way, is it possible to strip out unimportant words before sending the text to LUIS?
Here is the LUIS integration with the Speech service: https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/interactive-voice-response-bot#ai-and-nlp-azure-services
We do have a Telephony channel, which is currently in private preview and, as such, comes with preview terms (no SLA, etc.).
Here are the details about the preview: https://github.com/microsoft/BotFramework-IVR.
I am currently using the Google Places API on a free trial. I am interested in paying for the API but can't find the exact cost of the two commands that I use: google_places() and google_place_details(). I have contacted the Google sales team and looked at the Places and Billing URL, but I have not managed to find out exactly how much it would cost to execute these two commands.
For google_places(), this is an example of a command I would execute:
google_places(search_string = "Cafeteria in Madrid, Spain", key=key)
From the Places and Billing URL, it seems like this counts as a Text Search, so each time the code is executed it would cost $0.032. Is this the case?
For google_place_details(), here is an example of the command I would execute:
google_place_details(place_id = "ChIJf_XA-F0U04kR1IPYSdTJ4so", key=key)
This command, as well as giving basic place details (which cost $0.017 according to the billing URL), returns information that counts as contact data (an extra $0.003) and atmosphere data (an extra $0.005). It also provides photo data ($0.007 according to the billing URL), which I am not interested in but which is automatically included in the results anyway. Does this mean that the cost of executing this command once is these four prices summed up?
I am interested in knowing exactly how much it would cost to execute the two commands I have listed.
Perhaps this helps:
First of all, you are billed monthly, after you exceed the 200 Euro/Dollar credit that Google gives you for free (which is probably what you described as the "free trial"). So after every month you get a bill showing how many requests of each function you sent to Google. There, everything is itemized quite clearly, including the number and price of each "unit", so you can easily divide it out.
The second option would be your Google API Cockpit.
It tracks your requests quite precisely on different time bases, so sending your desired commands just once a day can give you an exact total price.
The Cockpit is super handy for other things too. If you want, you can even set limits, which would probably be helpful in your case as well.
Here is the link to the billing monitor as well: Billing Google API Cockpit
Furthermore, here is the description of how Google charges you: look here.
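As a rough back-of-the-envelope check, using the unit prices quoted in the question (assumed here, not verified against the current price sheet), the per-call costs and a monthly estimate would work out like this:

// Cost estimate using the per-request prices quoted in the question.
// Verify these against Google's current price sheet before relying on them.
const PRICES = {
    textSearch: 0.032,        // google_places() -> Text Search
    placeDetailsBasic: 0.017, // google_place_details() base data
    contactData: 0.003,
    atmosphereData: 0.005,
    photoData: 0.007          // included in the response even if unused
};

// One google_place_details() call triggers all four SKUs: $0.032 total.
const detailsCallCost = PRICES.placeDetailsBasic + PRICES.contactData +
    PRICES.atmosphereData + PRICES.photoData;

const MONTHLY_CREDIT = 200; // the free monthly credit mentioned above

function estimateMonthlyBill(textSearches, detailCalls) {
    const gross = textSearches * PRICES.textSearch +
        detailCalls * detailsCallCost;
    return Math.max(0, gross - MONTHLY_CREDIT);
}

console.log(estimateMonthlyBill(5000, 5000)); // 160 + 160 - 200 = 120 (USD)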
Best regards
I have defined an intent in Watson Assistant using the following training examples:
adieu
au revoir
bye
bye now
ciao
cu
cya
exit
farewell
good bye
have a nice day
I'm leaving
later
quit
see you
so long
stop
we are done
A user inputs the word "again". Watson returns a match to this intent with a confidence level of about 0.9.
The word "again" does appear in a training example for a completely different intent, namely "I'm looking forward to working with you again! :)". It does not appear in any other training example.
What is the reasoning used by Watson Assistant to arrive at this match and with such a high level of confidence?
There is a whole load of factors that determine why one intent is picked over another.
Intents do not work properly if you have two or fewer intents.
Any entities you have created that are referenced in the example questions can also impact which intent is picked.
Contextual entities will also add weight to the part of speech (POS) of those entities.
The number of intents, and how frequently a word is used across those intents, can also impact the scoring.
Watson Assistant always tries to get meaning from the term where it can.
When trying to determine why it picked one intent over another, you need to look at both. The intent you mention may not even be the second one picked.
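One way to see what else was in the running is to ask the service for more than just the top intent. Below is a minimal sketch against the Watson Assistant v1 /message REST endpoint; the service URL, workspace ID, API key, and version date are placeholders to adjust for your own instance:

// Sketch: request alternate intents so the top confidences can be compared.
// SERVICE_URL, WORKSPACE_ID and API_KEY are placeholders.
const SERVICE_URL = "https://api.us-south.assistant.watson.cloud.ibm.com";
const WORKSPACE_ID = "YOUR-WORKSPACE-ID";
const API_KEY = "YOUR-API-KEY";

async function inspectIntents(text) {
    const url = SERVICE_URL + "/v1/workspaces/" + WORKSPACE_ID +
        "/message?version=2019-02-28";
    const res = await fetch(url, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " +
                Buffer.from("apikey:" + API_KEY).toString("base64")
        },
        body: JSON.stringify({
            input: { text: text },
            alternate_intents: true // return a ranked list, not just the top intent
        })
    });
    const body = await res.json();
    // Each entry is { intent, confidence }; comparing the top two shows
    // how close the call actually was.
    console.log(body.intents);
}

inspectIntents("again");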
With just the one intent shown above, it's hard to say the "why", so this is just an educated guess as to what may be happening.
"again" is a single word and by itself has no context from which to determine an intent. The closest in the list would be "later".
It couldn't find any meaning whatsoever in the single word, so it looked at the intent with the most single-word examples as a possible reason to pick it.
That aside, you should try not to answer real one-to-two-keyword questions. There is almost never enough context for a person to answer them, so it's unlikely WA will be able to either.
I want to develop a bot using Watson Conversation that can answer some frequently asked questions about my application. Here is my thinking: every question has its own intent, and the answer will be returned in the response. But I have over 50 questions, which means I need to define over 50 intents, and Watson Conversation limits one workspace to 25 intents. Does anyone have any idea how to resolve this? Thanks.
There are two options for you:
1. Purchase the Standard plan ($0.0025 USD per API call), which includes up to 2,000 intents. Check out more on pricing here.
2. Link similar questions together and try to reduce the number of intents. For example, two questions regarding bank withdrawals and bank deposits could be handled by a single intent such as BankTransaction, with the type of transaction captured as entities (and dialog node conditions on those entities); see the sketch after this list.
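Here is a hedged sketch of what that consolidation might look like. The intent, entity, and condition names are made up for illustration, and the object shape mirrors the workspace JSON that Watson Conversation imports and exports:

// Illustrative only: one consolidated intent plus an entity, instead of
// two separate intents. All names here are hypothetical.
const workspaceFragment = {
    intents: [
        {
            intent: "BankTransaction",
            examples: [
                { text: "I want to withdraw money" },
                { text: "How do I deposit a check" }
            ]
        }
    ],
    entities: [
        {
            entity: "transaction_type",
            values: [
                { value: "withdrawal", synonyms: ["withdraw", "take out"] },
                { value: "deposit", synonyms: ["put in", "pay in"] }
            ]
        }
    ]
};
// Dialog nodes would then branch on the entity rather than the intent, e.g.:
//   #BankTransaction && @transaction_type:withdrawal
//   #BankTransaction && @transaction_type:deposit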
And besides that limit, there is another one: a maximum of 1,000 API queries per month. That is fine for a proof of concept or for the development phase, but if you intend to go to production, you should purchase a plan. https://www.ibm.com/watson/developercloud/conversation.html
I am developing a simple, custom skill for Alexa. I have it up and running, and hosting the handler on AWS Lambda. It's working fine except...
In the test UI, if I enter a valid utterance, e.g., help, cancel, swim, run (two custom utterances), everything works well; however, if I enter a nonsense utterance, e.g., dsfhfdsjhf, the Alexa service always maps the nonsense to the first valid intent in the intents schema.
In my lambda code, I have a handler for handling unknown intents; however, the intent is never unknown. Is this an artifact of the test interface? Something else happening?
Thanks,
John
Based on the inclusion of an unhandled intent in your approach, it sounds like you are using the Alexa Skills Kit SDK for Node.js. Your issue is not an artifact of the test interface. Yes, something else is happening.
While not yet acknowledged by Amazon, this is a recognized issue in the SDK among a number of folks. See this open issue. Speaking from personal experience regarding the suggestion above: it doesn't matter whether you use real words or gibberish, the unhandled intent is never called. Until this is fixed, my suggestion would be to build a handler that acts as a high-level prompt for your skill and reiterates the valid options the user has, and position it to be the catch-all. Hopefully we will see better maintenance of this SDK moving forward.
Instead of typing dsfhfdsjhf (which is not pronounceable in any language Alexa knows), what happens if your utterance is boogie or shake?
In a real-world scenario, I don't think Alexa would ever pass dsfhfdsjhf, so it may be difficult to plan exactly what the behavior would be.
So you'd like to pipe all garbage inputs to a single intent. You're in luck. Here are a few things you should know before proceeding.
In Node.js, the Unhandled handler is fired within a MODE if the intent returned by the Alexa Voice Service is not available within the given MODE.
An example MODE would be confirmation mode: of the many intents that are available, yes and no are the only intents that are accepted.
var ConfirmationHandlers = Alexa.CreateStateHandler(states.CONFIRMATIONMODE, {
    'YesIntent': function () {
        this.handler.state = states.CLOSINGCOSTSMODE;
        var message = `So you will be buying this house. Great!`;
        var reprompt = `Please carry on with the other intents found in the house buyer skill.`;
        this.emit(':ask', message, reprompt);
    },
    'NoIntent': function () {
        this.handler.state = states.GENERALSEARCHMODE;
        var message = `So you won't be buying this house. That's OK. Continue searching for your dream house in the house buyer skill!`;
        var reprompt = `Continue searching for your dream house in the house buyer skill.`;
        this.emit(':ask', message, reprompt);
    },
    // Fired when the intent mapped by the Alexa Voice Service has no
    // handler registered in this state.
    'Unhandled': function () {
        console.log("UNHANDLED");
        var reprompt = `All other intents are disabled at this moment. Would you like to buy this house, yes or no?`;
        this.emit(':ask', reprompt, reprompt);
    }
});
However, before reaching the Lambda function, the Alexa Voice Service must interpret your utterance and map it to one of the available intents. If your utterance is garbage and does not map to any specific intent, it is currently being mapped to the first intent.
Solution: if you would like to add a garbage intent, this is something that should be handled by the intent schema, not by the Unhandled handler. To add a garbage intent, you can follow the instructions in this Amazon article.
https://developer.amazon.com/blogs/post/Tx3IHSFQSUF3RQP/Why-a-Custom-Slot-is-the-Literal-Solution
Scenario 3: I just want everything. Using custom slot types for grammar as described above typically fulfills this desire and enables you to improve accuracy through NLP training. If you still just want everything, you can create a custom slot called something like "CatchAll" and a corresponding intent and utterance: CatchAllIntent {CatchAll}. If you use the same training data that you would have used for LITERAL, you'll get the same results. People typically find that adding a little more scenario-specific training data improves accuracy.
If you're still not getting the results, try setting the CatchAll values to around twenty 2-to-8-word random phrases (from a random word generator; be really random). When the user says something that matches your other utterances, those intents will still be sent. When it doesn't match any of those, it will fall to the CatchAll slot. If you go this route, you're going to lose accuracy because you're not taking full advantage of Alexa's NLP, so you'll need to test heavily.
Any input that is not mapped to one of your more specific intents, like YES or NO, will very likely map to this CatchAll intent.
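For concreteness, here is a sketch of what that might look like in the older-style intent schema (JSON); SwimIntent and RunIntent are stand-ins for the question's custom intents, and the CATCHALL custom slot type would be populated with the roughly twenty random 2-to-8-word phrases the article describes:

{
  "intents": [
    { "intent": "AMAZON.HelpIntent" },
    { "intent": "AMAZON.CancelIntent" },
    { "intent": "SwimIntent" },
    { "intent": "RunIntent" },
    {
      "intent": "CatchAllIntent",
      "slots": [{ "name": "CatchAll", "type": "CATCHALL" }]
    }
  ]
}

The matching line in the sample utterances file would simply be:

CatchAllIntent {CatchAll}

Gibberish like dsfhfdsjhf should then land in CatchAllIntent instead of being forced onto the first intent in the schema.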
I want to know if there is any API that allows me to get the number of reviews for a business from a URL.
I know that Google offers the possibility of getting this number by using the place_id, but the only information I have is the URL of the company's website.
Any ideas, please?
Maybe, but probably not.
Places API Text Search seems to be able to find places by their URL:
https://maps.googleapis.com/maps/api/place/textsearch/json?key=YOURKEY&query=http://www.starbucks.com/store/1014527/us/303-congress-street/303-congress-street-boston-ma-02210
However, this is not a documented feature of the API and I do not think it can be relied upon, so I'd recommend filing a feature request to make this a supported, reliable feature.
As for the number of reviews, you may be interested in:
Issue 3484: Add # of reviews to the Place Details Results
I've written an API like this for Reviewsmaker, but I target specific business names, not URLs. See this example (I have activated a key for this purpose for now):
http://reviewsmaker.com/api/google/?business=life%20made%20a%20little%20easier&api_key=4a2819f3-2874-4eee-9c46-baa7fa17971c
Or try it yourself with any business name:
http://reviewsmaker.com/api/google/?business=Toys R Us&api_key=4a2819f3-2874-4eee-9c46-baa7fa17971c
Such a call returns a JSON object like this:
{
  "results": {
    "business_name": "Life Made A Little Easier",
    "business_address": "1702 Sheepshead Bay Rd, Brooklyn, NY 11235, USA",
    "place_id": "ChIJ_xjIR2REwokRH2qEigdFCvs",
    "review_count": 38
  },
  "api": {
    "author": "Ilan Patao",
    "home": "www.reviewsmaker.com"
  }
}
Pinging this endpoint with a cron job, for example once every hour or two, and returning the review_count can pretty much give you your own review monitoring app.
You can probably do what you're looking for if you query the Places API Text Search or the CSE (Custom Search Engine) API to look up the URL, return the matching name of the business associated with that URL, and then call an endpoint like this one to return the associated review count.
You can probably code this in Python or PHP. I'm not sure how familiar you are with data parsing, but I was able to build my API on top of Google's CSE API. CSE provides metadata in its results which contains the total reviews, so if you create a CSE engine and use the CSE API looking for business schemas, review schemas, etc., it returns items, and within the PageMap node there are objects with the data you need; with very little tweaking (such as string replacing and trimming) you can extract the values you're looking for.
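Here is a rough sketch of that flow in Node.js. The CSE ID and API key are placeholders, and the pagemap fields shown are the schema.org review markup mentioned above; whether they appear at all depends entirely on the markup of the site being searched:

// Sketch: look up a business URL via the Custom Search JSON API and pull
// a review count out of the pagemap metadata, if the site exposes one.
// CSE_ID and API_KEY are placeholders.
const CSE_ID = "YOUR-CSE-ID";
const API_KEY = "YOUR-API-KEY";

async function reviewCountForUrl(businessUrl) {
    const url = "https://www.googleapis.com/customsearch/v1" +
        "?key=" + API_KEY + "&cx=" + CSE_ID +
        "&q=" + encodeURIComponent(businessUrl);
    const res = await fetch(url);
    const body = await res.json();

    for (const item of body.items || []) {
        // schema.org AggregateRating markup, when present, surfaces here.
        const ratings = (item.pagemap && item.pagemap.aggregaterating) || [];
        for (const rating of ratings) {
            if (rating.reviewcount || rating.ratingcount) {
                return Number(rating.reviewcount || rating.ratingcount);
            }
        }
    }
    return null; // no review metadata exposed for this URL
}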
Hope my answer helped, at least to lead you in the right direction :)