Trying to call another intent from a function in Amazon Alexa - alexa-skills-kit

I am looking to create a simple coffee-ordering skill for the office. I am new to Node and Amazon Alexa, and I am using the alexa-app package. I want Alexa to follow up my request with a different question, or a separate intent, depending on what I asked for. What is the best way to go about this? I am having difficulty seeing how to trigger a new intent.
Example conversation flow:
Me: Alexa, start office Assistant
Alexa: How can I help?
Me: I would like to order some coffees. (or any other services)
New intent started based on request
Alexa: Great, what would you like?

In the response to "I would like to order some coffees", you can send a response to the Alexa Cloud Service specifying shouldEndSession as false. See the documentation on the response object here. This will make the Echo continue to listen for a second user intent.
You can have a separate utterance which will map to an intent to order a specific sort of coffee (maybe using the custom slot type syntax):
SpecificCoffeeIntent I would like a {CoffeeType}, please
Note that you don't call this intent directly - when the user says "I would like a mocha please" in response to Alexa saying "Great, what would you like?", you'll be sent a SpecificCoffeeIntent. Your code can then process the intent to order coffee.
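The shouldEndSession flag lives in the response JSON your skill returns to the Alexa service. A minimal sketch of that payload (the field names follow the Alexa custom-skill response format; the helper function and prompt text are invented for this coffee example):

```javascript
// Minimal sketch of the raw JSON an Alexa skill sends back.
// The helper name and prompt text are made up for this example.
function buildResponse(speechText, keepListening) {
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speechText },
      // shouldEndSession: false keeps the microphone open, so the user's
      // next utterance ("I would like a mocha please") arrives as a new
      // intent, e.g. SpecificCoffeeIntent.
      shouldEndSession: !keepListening
    }
  };
}

const reply = buildResponse('Great, what would you like?', true);
console.log(reply.response.shouldEndSession); // false
```

With alexa-app specifically, the package exposes an equivalent setting on its response object, so you never have to build this JSON by hand, but this is what ends up on the wire.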

Related

Telegram API: How to make message history for new group users visible

I want to make the message history visible to new group users. So far I have found the channels.togglePreHistoryHidden method in the API, but it takes a channel as an argument, not a group, and there is no similar method for groups. Moreover, as far as I know there is no option to hide message history for channels in Telegram, so I'm a bit confused. Could anyone explain how I should do this? A code snippet in Telethon would be great.

Is it possible to change contents dynamically which Alexa skill shows without any user actions?

I want to create a Photo Frame Skill for Echo Show.
I want to change the photos via a trigger from an external server (such as Firebase).
Is it possible to change them dynamically, without any user action?
I looked at the Notification API and the Proactive Events API.
But these show notifications to the user, and I don't want to show anything to the user.
I just want a trigger, controlled from an external server, that changes the content.
The answer depends a lot on the type of skill (for example if it is based on Alexa Conversations or not). But you can try exploring something along this line:
Keep the token of last rendered APL document
Send an APL ExecuteCommands directive from your skill server (https://developer.amazon.com/en-US/docs/alexa/alexa-presentation-language/apl-interface.html#executecommands-directive)
You can use one of the standard APL commands, depending upon your use case. One option is SetValue command (https://developer.amazon.com/en-US/docs/alexa/alexa-presentation-language/apl-standard-commands.html#setvalue-command) to modify the background image.
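Put together, the directive could look like the sketch below. The document token, component ID, and property name are assumptions for illustration; they must match whatever APL document your skill actually rendered.

```javascript
// Sketch of an ExecuteCommands directive that updates an image already on
// screen. "photo-frame-doc", "backgroundImage", and "source" are example
// values -- they must match the token, componentId, and property of the
// APL document the skill rendered earlier.
function buildSetValueDirective(documentToken, newImageUrl) {
  return {
    type: 'Alexa.Presentation.APL.ExecuteCommands',
    token: documentToken, // must equal the token of the rendered document
    commands: [{
      type: 'SetValue',
      componentId: 'backgroundImage',
      property: 'source',
      value: newImageUrl
    }]
  };
}

const directive = buildSetValueDirective(
  'photo-frame-doc', 'https://example.com/photo2.jpg');
console.log(directive.commands[0].type); // SetValue
```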
I want to create a Photo Frame Skill for Echo Show
Technically, a skill can stay alive for a maximum of 5 minutes 30 seconds without any user interaction, and only if you provide a prompt that lasts 4 minutes followed by a reprompt that lasts 90 seconds (either could be blank audio). But Alexa is not suited to custom skills that stay live for a long time without user interaction.

Watson conversation return different results for the same request

I created one simple conversation dialogue, and when I input the same question, Watson returns different results. As you can see in the attached picture, the first time Watson only matches the intent but does not return the response message; when I enter the same question again, it returns the predefined response.
It seems that Watson only returns the response message on even-numbered requests; on odd-numbered requests it returns nothing.
Can anyone help me on this? Thanks.
The "Try it out" window is good for simple checking, but not great if you want to know what is actually happening in the background.
I recommend deploying the Conversation Simple test app. This will allow you to query your conversation and easily see the request/response.
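The key detail such a test app makes visible is the context object: each response carries one, and you pass it back with the next message so Conversation knows where it is in the dialog tree. A rough sketch of the per-turn payload (a plain object, not a specific SDK call; the workspace ID is a placeholder):

```javascript
// Rough sketch of the message payload sent each turn. Passing the context
// from the previous response back in is what keeps Conversation at the
// right position in the dialog tree between turns.
function buildMessageRequest(text, previousContext) {
  return {
    workspace_id: 'YOUR_WORKSPACE_ID', // placeholder
    input: { text: text },
    context: previousContext || {} // empty context starts a new conversation
  };
}

const firstTurn = buildMessageRequest('I want to buy a dog', null);
// Pretend the service replied with a context; echo it back on turn two.
const secondTurn = buildMessageRequest('a Mudi please',
  { conversation_id: 'abc', system: {} });
console.log(secondTurn.context.conversation_id); // abc
```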
To help debug further, give your nodes meaningful names. Then, in the response JSON, check the nodes_visited section. It might look something like this:
"nodes_visited": [
    "FLOW purchase dog",
    "Check for Mudi"
]
In this case, the user asked to buy a dog. The first node then jumped to the second node, "Check for Mudi", and that is the node that just completed.
In your sample image, what might be happening is this:
First user input hits the first node.
At this point, Conversation is sitting at the speech bubble in the tree.
The next user input is checked against that branch, not the top level.
Only after finishing the branch does Conversation go back to the root.
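That evaluation order can be sketched as a toy simulation (not Watson code, just the branch-first matching rule described above): after a node with children fires, the next input is matched against that node's children before falling back to the root.

```javascript
// Toy simulation of branch-first dialog evaluation -- not the Watson SDK.
function evaluate(rootNodes, position, intent) {
  // After a node fires, its children are checked before the root level.
  const candidates = position ? position.children.concat(rootNodes) : rootNodes;
  return candidates.find(function (node) { return node.intent === intent; }) || null;
}

const root = [{
  name: 'FLOW purchase dog',
  intent: 'buy_dog',
  children: [{ name: 'Check for Mudi', intent: 'buy_dog', children: [] }]
}];

const first = evaluate(root, null, 'buy_dog');   // matched at the root
const second = evaluate(root, first, 'buy_dog'); // branch checked first
console.log(first.name + ' -> ' + second.name);  // FLOW purchase dog -> Check for Mudi
```

The same input lands on a different node the second time, which is why identical questions can produce different responses.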

Alexa Skills Kit (ASK) and Utterances

I am developing a simple, custom skill for Alexa. I have it up and running, and hosting the handler on AWS Lambda. It's working fine except...
In the test UI, if I enter a valid utterance, e.g., help, cancel, swim, run (two custom utterances), everything works well; however, if I enter a nonsense utterance, e.g., dsfhfdsjhf, the Alexa service always maps the nonsense to the first valid intent in the intents schema.
In my lambda code, I have a handler for handling unknown intents; however, the intent is never unknown. Is this an artifact of the test interface? Something else happening?
Thanks,
John
Based on the inclusion of an unhandled intent in your approach, it sounds like you are using the Alexa Skills Kit SDK for Node.js. Your issue is not an artifact of the test interface. Yes, something else is happening.
While not yet acknowledged by Amazon, this is an issue in the SDK that a number of folks have recognized. See this open issue. Speaking from personal experience with the suggestion above, it doesn't matter whether you use real words or gibberish; the unhandled intent is never called. Until this is fixed, my suggestion would be to build a handler that gives a high-level prompt for your skill and reiterates the valid options the user has, and position it to be the catch-all. Hopefully we will see better maintenance of this SDK moving forward.
Instead of typing dsfhfdsjhf (which is not pronounceable in any language Alexa knows), what happens if your utterance is boogie or shake?
In a real-world scenario, I don't think Alexa would ever pass dsfhfdsjhf, so it may be difficult to plan exactly what the behavior would be.
So you'd like to pipe all garbage inputs to a single intent. You're in luck. Here's a few things you should know before proceeding.
In Node.js the unhandled handler is fired within a MODE if the intent returned by the Alexa voice service is not available within the given MODE.
An example MODE would be confirmation mode, where, of the many intents available, yes and no are the only ones accepted.
var ConfirmationHandlers = Alexa.CreateStateHandler(states.CONFIRMATIONMODE, {
    'YesIntent': function () {
        this.handler.state = states.CLOSINGCOSTSMODE;
        var message = `So you will be buying this house. Great!`;
        var reprompt = `Please carry on with the other intents found in the house buyer skill.`;
        this.emit(':ask', message, reprompt);
    },
    'NoIntent': function () {
        this.handler.state = states.GENERALSEARCHMODE;
        var message = `So you won't be buying this house. That's OK, continue searching for your dream house in the house buyer skill!`;
        var reprompt = `Continue searching for your dream house in the house buyer skill.`;
        this.emit(':ask', message, reprompt);
    },
    'Unhandled': function () {
        console.log("UNHANDLED");
        var reprompt = `All other intents are disabled at this moment. Would you like to buy this house, yes or no?`;
        this.emit(':ask', reprompt, reprompt);
    }
});
However, before reaching the Lambda function, the Alexa Voice Service must interpret your utterance and map it to one of the available intents. If your utterance is garbage and does not map to any specific intent, it is currently being mapped to the first intent.
Solution: if you would like to add a garbage intent, this is something that should be handled by the intent schema, not by the unhandled intent. To add a garbage intent, you can follow the instructions in this Amazon article.
https://developer.amazon.com/blogs/post/Tx3IHSFQSUF3RQP/Why-a-Custom-Slot-is-the-Literal-Solution
Scenario 3: I just want everything. Using custom slot types for grammar as described above typically fulfills this desire and enables you to improve accuracy through NLP training. If you still just want everything, you can create a custom slot called something like “CatchAll” and a corresponding intent and utterance: CatchAllIntent {CatchAll}. If you use the same training data that you would have used for LITERAL, you’ll get the same results. People typically find that adding a little more scenario-specific training data improves accuracy.
If you’re still not getting the results, try setting the CatchAll values to around twenty 2 to 8 word random phrases (from a random word generator – be really random). When the user says something that matches your other utterances, those intents will still be sent. When it doesn’t match any of those, it will fall to the CatchAll slot. If you go this route, you’re going to lose accuracy because you’re not taking full advantage of Alexa’s NLP, so you’ll need to test heavily.
Any input that is not mapped to one of your more specific intents, like YES or NO, will very likely map to this CatchAll intent.
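A sketch of what those schema additions might look like, in the older intent-schema format these answers use (the slot-type name and filler phrases are placeholders; per the quoted article you would use around twenty random 2-to-8-word phrases as slot values):

```javascript
// Sketch of a CatchAll intent in the older intent-schema format. The
// CATCH_ALL values below are placeholders -- the quoted article suggests
// around twenty random 2-to-8-word phrases from a random word generator.
const intentSchema = {
  intents: [
    { intent: 'YesIntent' },
    { intent: 'NoIntent' },
    { intent: 'CatchAllIntent', slots: [{ name: 'CatchAll', type: 'CATCH_ALL' }] }
  ]
};

const customSlotValues = {
  CATCH_ALL: ['purple monkey dishwasher', 'seven quiet rivers sing loudly']
};

// Corresponding sample utterance line:  CatchAllIntent {CatchAll}
console.log(intentSchema.intents[2].intent); // CatchAllIntent
```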

post/send updates or messages to fans(who likes website) not as user

I have a simple website for auctions. I want people who like my website to be informed about any new auction, and I want to send messages or post updates to their news feed, or post on their wall, when they win an auction. Users should also be able to share the story of a new auction with their friends.
For this, do I need a separate canvas app?
I went through the documentation http://csharpsdk.org/. Also I read official graph api docs.
I thought there would be lists of classes in each namespace and their uses, lists of methods and properties in each class and their uses, and explanations of the parameters.
I can't find such documentation for the Facebook C# SDK. The official docs only have 5 articles, of which 3 are [TO-DO].
I can't find what classes there are, or what the purpose of a particular class or method is.
What can be used to inform users about new auctions and winning news?
sending messages
posting to their news feed
posting on their wall (or timeline)
I found out how to post on behalf of the user, but I need to inform the fans of my website when there is a new auction. How do I do that?
Winning an auction
To generate user actions on their walls about winning auctions and the like, you need to look at Facebook Open Graph.
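As a hedged sketch (raw Graph API over HTTPS, independent of the C# SDK), publishing a custom Open Graph story follows the POST /me/{namespace}:{action} pattern. The app namespace "myauctions", the action "win", and the object type "auction" are invented examples; you define your own in the app's Open Graph settings.

```javascript
// Sketch of a Graph API request publishing an Open Graph "win" story.
// Namespace, action, and object type are made-up examples for this answer.
function buildOpenGraphStoryRequest(namespace, action, objectType, objectUrl, accessToken) {
  const params = new URLSearchParams();
  params.set(objectType, objectUrl);       // URL of the page describing the auction object
  params.set('access_token', accessToken); // user access token with publish permission
  return {
    method: 'POST',
    url: 'https://graph.facebook.com/me/' + namespace + ':' + action +
         '?' + params.toString()
  };
}

const req = buildOpenGraphStoryRequest('myauctions', 'win', 'auction',
  'https://example.com/auction/42', 'TOKEN');
console.log(req.method, req.url.indexOf('myauctions:win') > -1); // POST true
```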
For informing about new Auctioning and winning news what can be used ?
#1: sending messages is not possible.
Notifying users
For sending notifications to users about new auctions, you need to use apprequests.
But apprequests are only for canvas apps; apart from this, I don't think you have a way of notifying users, unless you store their email addresses and notify them that way.
Hope this helps
