Alexa Skills Kit (ASK) and Utterances

I am developing a simple, custom skill for Alexa. I have it up and running, and hosting the handler on AWS Lambda. It's working fine except...
In the test UI, if I enter a valid utterance, e.g., help, cancel, swim, run (two custom utterances), everything works well; however, if I enter a nonsense utterance, e.g., dsfhfdsjhf, the Alexa service always maps the nonsense to the first valid intent in the intents schema.
In my Lambda code, I have a handler for unknown intents; however, the intent is never unknown. Is this an artifact of the test interface, or is something else happening?
Thanks,
John

Based on the inclusion of an unhandled intent in your approach, it sounds like you are using the Alexa Skills Kit SDK for Node.js. Your issue is not an artifact of the test interface. Yes, something else is happening.
While not yet acknowledged by Amazon, this is an issue a number of people have run into with the SDK. See this open issue. Speaking from personal experience, it doesn't matter whether you use real words or gibberish: the unhandled intent is never called. Until this is fixed, my suggestion would be to build a handler that acts as a high-level prompt for your skill and reiterates the valid options the user has, and position it to be the catch-all (see the sketch below). Hopefully we will see better maintenance of this SDK moving forward.
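For illustration, such a catch-all prompt handler could look roughly like this (a minimal sketch in the alexa-sdk v1 handler style; the intent name and the prompt wording are placeholders for your own skill, not something from the question):

var handlers = {
    // Hypothetical catch-all intent: restate the valid options and keep the session open
    'CatchAllIntent': function () {
        var message = 'You can say swim or run, or say help to hear your options again.';
        this.emit(':ask', message, message);
    }
};

Registered alongside your other handlers (for example via alexa.registerHandlers), this at least gives unexpected input a sensible response until the Unhandled handler works as documented.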

Instead of typing dsfhfdsjhf (which is not pronounceable in any language Alexa knows), what happens if your utterance is boogie or shake?
In a real-world scenario, I don't think Alexa would ever pass dsfhfdsjhf, so it may be difficult to plan exactly what the behavior would be.

So you'd like to pipe all garbage inputs to a single intent. You're in luck. Here are a few things you should know before proceeding.
In Node.js, the Unhandled handler is fired within a MODE if the intent returned by the Alexa Voice Service is not available within the given MODE.
An example MODE would be a confirmation mode, where, of the many intents available, yes and no are the only ones accepted:
var ConfirmationHandlers = Alexa.CreateStateHandler(states.CONFIRMATIONMODE, {
    'YesIntent': function () {
        this.handler.state = states.CLOSINGCOSTSMODE;
        var message = `So you will be buying this house. Great!`;
        var reprompt = `Please carry on with the other intents found in the house buyer skill.`;
        this.emit(':ask', message, reprompt);
    },
    'NoIntent': function () {
        this.handler.state = states.GENERALSEARCHMODE;
        var message = `So you won't be buying this house. That's OK. Continue searching for your dream house in the house buyer skill.`;
        var reprompt = `Continue searching for your dream house in the house buyer skill.`;
        this.emit(':ask', message, reprompt);
    },
    'Unhandled': function () {
        console.log("UNHANDLED");
        var reprompt = `All other intents are disabled at this moment. Would you like to buy this house, yes or no?`;
        this.emit(':ask', reprompt, reprompt);
    }
});
However, before reaching the Lambda function, the Alexa Voice Service must interpret your utterance and map it to one of the available intents. If your utterance is garbage and does not map to any specific intent, it is currently mapped to the first intent in the schema.
Solution: if you would like to add a garbage intent, this is something that should be handled by the intent schema, not by the Unhandled handler. To add a garbage intent, you can follow the instructions in this Amazon article.
https://developer.amazon.com/blogs/post/Tx3IHSFQSUF3RQP/Why-a-Custom-Slot-is-the-Literal-Solution
Scenario 3: I just want everything. Using custom slot types for
grammar as described above typically fulfills this desire and enables
you to improve accuracy through NLP training. If you still just want
everything, you can create a custom slot called something like
“CatchAll” and a corresponding intent and utterance: CatchAllIntent
{CatchAll}. If you use the same training data that you would have used
for LITERAL, you’ll get the same results. People typically find that
adding a little more scenario specific training data improves
accuracy.
If you’re still not getting the results, try setting the CatchAll
values to around twenty 2 to 8 word random phrases (from a random word
generator – be really random). When the user says something that
matches your other utterances, those intents will still be sent. When
it doesn’t match any of those, it will fall to the CatchAll slot. If
you go this route, you’re going to lose accuracy because you’re not
taking full advantage of Alexa’s NLP so you’ll need to test heavily.
Any input that is not mapped to one of your more specific intents, like YES or NO, will very likely map to this CatchAll intent.
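To make that concrete, the relevant pieces of a legacy interaction model might look roughly like this (the intent names here are placeholders based on the question's "swim" and "run" examples, and per the article the CATCH_ALL slot values should be around twenty random, unrelated 2 to 8 word phrases):

Intent schema:
{
  "intents": [
    { "intent": "SwimIntent" },
    { "intent": "RunIntent" },
    {
      "intent": "CatchAllIntent",
      "slots": [
        { "name": "CatchAll", "type": "CATCH_ALL" }
      ]
    }
  ]
}

Custom slot type CATCH_ALL (example values; use your own random phrases):
purple elephant quietly paints the fence
seventeen clouds argued about breakfast

Sample utterances:
SwimIntent swim
RunIntent run
CatchAllIntent {CatchAll}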

Related

Question about getting started with the HERE API

I am just getting started and thought I might see if anyone knew a good place to start...
I want to show a filtered list of HERE Places in a mobile app. Every time I think I've found the right path, like HERE Places, it appears to be deprecated.
Where is the right place to start?
You can check the GS7 (Geocoding and Search API v7) services from HERE Maps.
Please find below details for your use-case scenario.
Services Overview
Discover provides users the ability to find a known place or address (partial or complete), as well as discover an unknown place. The latter requires information to help the end-user make a decision whether or not to visit. The expectation is that multiple items may be returned and the end-user will select the most appropriate.
Autosuggest provides term and query suggestions as the end-user types. Spell correction is included.
Geocode returns the geocoordinates for a single requested address. (NOTE: If the query is ambiguous, the API may return multiple items.)
Autocomplete provides completion of the entered keystrokes to valid addresses. No spell correction is included.
Browse enables end-users to slice and dice HERE Map Content through various filters.
Lookup by ID finds one result based on its unique location ID.
Reverse Geocode returns the nearest address to known geo-coordinates.
Endpoint URLs
The following are the current endpoint URLs for submitting API requests:
autosuggest - https://autosuggest.search.hereapi.com/v1/autosuggest
search - https://discover.search.hereapi.com/v1/discover
geocode - https://geocode.search.hereapi.com/v1/geocode
autocomplete - https://autocomplete.search.hereapi.com/v1/autocomplete
browse - https://browse.search.hereapi.com/v1/browse
lookup by ID - https://lookup.search.hereapi.com/v1/lookup
revgeocode - https://revgeocode.search.hereapi.com/v1/revgeocode
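As a quick way to try these out, you can call the discover endpoint directly; here is a minimal sketch for Node.js 18+ (built-in fetch), where the API key, coordinates, query, and limit are placeholders you would replace with your own values:

// Search for places around a coordinate using the discover endpoint
const params = new URLSearchParams({
  at: '52.5308,13.3847',        // search around this lat,lng
  q: 'restaurant',              // free-text query
  limit: '5',
  apiKey: 'YOUR_HERE_API_KEY'
});

fetch('https://discover.search.hereapi.com/v1/discover?' + params)
  .then((res) => res.json())
  .then((data) => {
    // Each item carries a title, an address and a position, among other fields
    for (const item of data.items) {
      console.log(item.title, '-', item.address.label);
    }
  });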
If you want to start with a mobile app, I would recommend the HERE SDK for Flutter. There is a good starting point to search for HERE places.
Even better, there is also a free app with all the source code you need. It's open source, so you can use it for your own projects. Just try it out and see if it gives you what you need.
Searching for places does not require much code:
_searchEngine.searchByText(query, searchOptions, (SearchError? searchError, List<Place>? list) async {
  if (searchError != null) {
    // Handle the error, e.g. log it.
    return;
  }
  // If the error is null, the list is guaranteed to be not empty.
  for (Place searchResult in list!) {
    ...
  }
});
This code is written in Dart, so you can compile and run it on iOS and Android devices.

Reason for an unexpected match to an intent in Watson Assistant

I have defined an intent in Watson Assistant using the following training examples:
adieu
au revoir
bye
bye now
ciao
cu
cya
exit
farewell
good bye
have a nice day
I'm leaving
later
quit
see you
so long
stop
we are done
A user inputs the word "again". Watson returns a match to this intent with a confidence level of about 0.9.
The word "again" does appear in a training example for a completely different intent, namely "I'm looking forward to working with you again! :)". It does not appear in any other training example.
What is the reasoning used by Watson Assistant to arrive at this match and with such a high level of confidence?
There are a whole load of factors that determine why one intent is picked over another.
Intents do not work properly if you have <= 2 intents.
Any entities you have created that are referenced in the example questions can also impact what is picked.
Contextual entities will also add weight to the part of speech (POS) of those entities.
The number of intents and how frequently the word is used across those intents can also impact the scoring.
Watson Assistant always tries to get meaning from the term where it can.
When trying to determine why it picked one intent over another, you need to look at both. The intent you mention may not even be the second one picked.
With just the one intent shown above it's hard to say the "why", so this is just an educated guess as to what may be happening.
"again" is a single word and by itself has no context to determine an intent. The closest in the list would be "later".
It couldn't find any meaning whatsoever in the single word, so it likely looked at the intent with the most single-word examples as a possible reason to pick it.
That aside, you should try not to answer 1-2 keyword questions. There is almost never enough context for a person to answer them, so it's unlikely Watson Assistant will be able to either.
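If you want to see which intents Watson actually ranked behind the winner, you can request alternate intents when sending the message. A minimal sketch against the Watson Assistant v1 REST API (Node.js 18+, built-in fetch); the service URL, instance ID, workspace ID, version date, and API key are placeholders for your own instance:

const url = 'https://api.us-south.assistant.watson.cloud.ibm.com/instances/YOUR_INSTANCE_ID'
  + '/v1/workspaces/YOUR_WORKSPACE_ID/message?version=2021-06-14';

fetch(url, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // IAM API key sent as basic auth with the literal user name "apikey"
    Authorization: 'Basic ' + Buffer.from('apikey:YOUR_API_KEY').toString('base64')
  },
  body: JSON.stringify({
    input: { text: 'again' },
    alternate_intents: true   // return all matching intents, not just the top one
  })
})
  .then((res) => res.json())
  .then((result) => {
    // result.intents is sorted by confidence; compare the top entries to see the runner-up
    console.log(result.intents);
  });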

What will be the type of user in case of SCRUM story for an API?

I have two queries related to SCRUM. They are as follows:
I have read that the format of a Scrum story is "As a < type of user >, I want < some goal > so that < some reason >". I have to write a story for an API. This API will send an email with a link to validate the email address of the user. What will be the type of user here? Will it be the logged-in user?
Do subtasks follow a format similar to a story, or can they be a normal description?
The trouble you are encountering is likely that you are starting from a predetermined implementation and then trying to work backwards to the need (unless your product is an API that your users leverage, in which case I think that answers your questions).
When we approach it from a user need, we'll usually end up with more of a problem statement, like
"As a vacationer, I'd like the site to calculate the best route across
all types of transportation for me so that I don't have to run many
searches to figure it out myself."
One of the pieces of delivering on this need will be creating the API calls if your application architecture calls for that. Then "add API method for aggregated call" may be a task under that user story.
You will have cases where all a particular story needs is API work, and that's fine, but it won't come out in the user story. For example, let's say we did the above user story but limited it to planes and trains for the first pass, then we created another story that reads:
"As a vacationer in the US, I want my trip planner to factor in buses
so that I can make use of bus tours in my vacation."
Now, maybe the only task in there is to make some API changes to include the bus routes in the search, but that doesn't cause a problem with your user stories, because we started from the user's problem statement instead of starting at the desired implementation and working backward.
Let's start clarifying some concepts first.
Scrum is not an acronym, so it is written as Scrum (a proper name). Also, there is nothing called a "Scrum story"; what you are referring to is called a user story. User stories were widely used in the Chrysler C3 project, where eXtreme Programming was developed. Furthermore, you are referring to a particular template, popularized by Mike Cohn, known as the canonical form. So it is fine to express your Product Backlog Items as user stories for an API. But take into account that you can use this template, use plain user stories, or write the Product Backlog Item in whatever way makes the most sense and adds the most value for you. In your case, which persona, machine, or service will be using the API?
About your second question: the Scrum Guide just says you should decompose the work in Sprint Planning into units of one day or less. Normally, these units of work are called tasks, and they represent the work necessary to carry out the user story. The way they are written is open too, but it is not common to write them in the canonical form. So you can write a task as an ID, a title, and a description.

Trying to call another intent from a function in Amazon Alexa

I am looking to create a simple coffee ordering skill for the office. I am new to Node and Amazon Alexa. I am using the alexa-app package. I want Alexa to respond to my response with a different question or a separate intent depending on my request. What is the best way to go about this? I am having difficulty seeing how to trigger a new intent.
Example conversation flow:
Me: Alexa, start office assistant
Alexa: How can I help?
Me: I would like to order some coffees. (or any other service)
New intent started based on request
Alexa: Great, what would you like?
In the response to "I would like to order some coffees", you can send a response to the Alexa Cloud Service specifying shouldEndSession as false. See the documentation on the response object here. This will make the Echo continue to listen for a second user intent.
You can have a separate utterance which will map to an intent to order a specific sort of coffee (maybe using the custom slot type syntax):
SpecificCoffeeIntent I would like a {CoffeeType}, please
Note that you don't call this intent directly - when the user says "I would like a mocha please" in response to Alexa saying "Great, what would you like?", you'll be sent a SpecificCoffeeIntent. Your code can then process the intent to order coffee.
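Putting both pieces together with the alexa-app package, the flow might look roughly like this (a minimal sketch; the skill name, intent names, the COFFEE_TYPE custom slot type, and the utterances are placeholders, not part of the original question):

var alexa = require('alexa-app');
var app = new alexa.app('office-assistant');

// First turn: answer the request and keep the session open so Alexa listens for the follow-up
app.intent('OrderCoffeeIntent', {
  utterances: ['I would like to order some coffees']
}, function (request, response) {
  response.say('Great, what would you like?');
  response.reprompt('What kind of coffee would you like?');
  response.shouldEndSession(false);
});

// Second turn: the user's answer maps to this intent via the CoffeeType slot
app.intent('SpecificCoffeeIntent', {
  slots: { CoffeeType: 'COFFEE_TYPE' },              // COFFEE_TYPE would be a custom slot type
  utterances: ['I would like a {-|CoffeeType} please']
}, function (request, response) {
  var coffee = request.slot('CoffeeType');
  response.say('Ordering a ' + coffee + ' for you.');
});

module.exports = app;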

How can I get Amazon echo to respond to "preheat the car" or "what's the car battery status"? (they get hijacked)

I'm trying to create some skills for my Echo (for my own use, I'm not concerned about the invocation names not getting through review). I've set my invocation name as "the car" (I also tried "car"). I wanted to be able to ask what my battery status is and order Alexa to pre-heat the car (a Renault ZOE).
It seems no matter what I put in for my utterances, I always get the same responses:
Anything with "battery" in gets "I don't have a battery"
Anything with "heat" in gets "You have no smarthome devices, blah blah"
It seems like the words "battery" and "heat" result in things never matching my skill (even when I said the invocation name).
Is there anything I can do so that it will route actions along the lines of the above to my skill?
Edit: Today I get different results trying "preheat the car"... I just get a weird tone. It never calls my skill, nor does anything show in the Home section of the app. What does this tone mean?
Video here: https://twitter.com/DanTup/status/804615557605654528
With help from Reddit I managed to get this working reasonably well. "the car" was a bad invocation name, and I wasn't following the documented way of invoking skills (joining words etc. are fairly restrictive).
I'm now using my car as the invocation name and can do the following:
Alexa tell my car to preheat
Alexa ask my car for battery
As of Dec 2017, it's still not possible to have completely custom phrases with Alexa. Google Assistant/Home does support this however, via shortcuts.
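For reference, the legacy sample-utterance mapping behind those two commands might look roughly like this (the intent names are placeholders; note that connecting words like "tell ... to" and "ask ... for" belong to the launch phrase, not the utterance):

PreheatIntent preheat
PreheatIntent preheat the car
BatteryStatusIntent battery
BatteryStatusIntent battery status
BatteryStatusIntent what the battery status is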
