Amazon Alexa Skills Fallback intent not working - alexa-skills-kit

I'm reading the docs for Alexa Skills and there seems to be a fallback intent
https://developer.amazon.com/docs/custom-skills/standard-built-in-intents.html#fallback
I've turned it on in my app (added it to the list of intents), but when I enter an unknown command in the test area (either by voice or by typing), all I get is an annoying error sound and no request is sent to my server. I was expecting to receive an AMAZON.FallbackIntent request on my server.

UPDATE: more complete answer here: Why does the Fallback Intent not get called if you say ONE random word?
AMAZON.FallbackIntent is designed to catch out-of-domain requests while you're inside your skill's session. From what you describe, it sounds like you're typing garbage into the simulator before opening your skill.
To test it, open your skill in the simulator ("alexa open xxxxx"), respond from the backend without closing the session, and then try entering or saying garbage.
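For what it's worth, here's a minimal sketch of what the backend side can look like once the session is open, assuming a Node.js/TypeScript backend built with the ASK SDK v2 (the SDK choice and the prompt strings are just illustrative):

import * as Alexa from 'ask-sdk-core';

// Handles AMAZON.FallbackIntent once the skill session is already open.
const FallbackIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.FallbackIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak("Sorry, I didn't get that. Try asking me for help.")
      .reprompt('What would you like to do?') // the reprompt keeps the session open
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(FallbackIntentHandler /* plus your LaunchRequest and other handlers */)
  .lambda();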

Any way to detect what device a DialogFlow request came from?

So, the basic idea is this:
I make a request to an assistant app to play some audio.
In general, these requests will come from a Google Home device (I know, they could come from a mobile phone etc, but I'm only worried about Home devices right now).
I can easily send a text response back and have it read.
But what I'd rather do is stream an audio file to the source device via standard Chromecast casting.
If I make my request something like
"Hey Google, ask {my app} to read me the instructions for {blah} in the {living room}" then it's no problem to pick out "living room" as the target device and send the audio there. That works perfect.
But if the user says
"Hey Google, ask {my app} to read me the instructions for {blah}"
What I'd like is to be able to send the audio to the device the request came from.
I found this question:
Detect request coming from google home using dialogflow
which is close but not exactly what I'm after.
EDIT: I also found the "Media Response" info, but it looks like that would only be good for a single media response. In my case, I may have several audio clips that need to play back to back, so a single media response wouldn't work (at least, I don't see a way it would work from what I know).
Is this even possible?
We currently don't have the ability to treat the Home like a Chromecast through an Action.
However, as you note, we do have the ability to use the Media Response. It does only send one audio file at a time; however, when it completes, Dialogflow will be called with an event of actions_intent_MEDIA_STATUS, which you can create an Intent to capture. You can then send another Media Response with the next song in the playlist. (Or do whatever else you want as part of the conversation.)
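A rough sketch of how that chaining can look with the actions-on-google Node.js client library for Dialogflow; the intent names, clip URLs, and conversation-data shape are made up for illustration:

import { dialogflow, MediaObject, Suggestions } from 'actions-on-google';

// Hypothetical playlist of clips that should play back to back.
const playlist = [
  'https://example.com/audio/step1.mp3',
  'https://example.com/audio/step2.mp3',
];

const app = dialogflow<{ track: number }, {}>();

app.intent('read instructions', (conv) => {
  conv.data.track = 0;
  conv.ask('Here are the instructions.');
  conv.ask(new MediaObject({ name: 'Step 1', url: playlist[0] }));
  conv.ask(new Suggestions('Stop')); // media responses need a suggestion chip on screen devices
});

// Dialogflow intent configured with the event actions_intent_MEDIA_STATUS.
app.intent('media status', (conv) => {
  const status = conv.arguments.get('MEDIA_STATUS') as { status?: string } | undefined;
  const next = conv.data.track + 1;
  if (status && status.status === 'FINISHED' && next < playlist.length) {
    conv.data.track = next;
    conv.ask('Next step.');
    conv.ask(new MediaObject({ name: `Step ${next + 1}`, url: playlist[next] }));
    conv.ask(new Suggestions('Stop'));
  } else {
    conv.close("That's everything.");
  }
});

export { app }; // wire this up as your Dialogflow fulfillment (e.g. a Cloud Function)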

Are the Smart Home API Error Messages supposed to make Alexa respond with more useful information?

I've nearly finished my lambda service for my smart home skill, and everything works great. The Echo is receiving my confirmations and correctly relaying their information. I'm now trying to build in error handling.
From the SHS API reference, there are a bunch of error messages listed that correspond to different circumstances. Are these errors supposed to change what Alexa says? Regardless of which one, if any, I use, Alexa just responds that the command doesn't work on that device. Right now I'm literally just using callback(err) and returning the copied-and-pasted object from the API reference, and Alexa still responds with the generic error.
It's easy to put in a bunch of constants to define error returns. It's harder to wire all of that into a firmware patch for a hardware device. Also, they only release an update to the SDK a few times a year, while they patch the hardware every couple of weeks.
Given that, I suspect they put those error returns into the SDK to meet the SDK's ship date, more as placeholders than as specific functionality. Over time, and if there is increased adoption of home skills, they will roll out updates to the hardware that take advantage of those returns.
My advice would be to use them, but not to expect a difference right now, and don't mention differences in your documentation. If there is another place where you can surface diagnostic information, you might want to do that so your customers can fix their problems.
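For reference, here's a hedged sketch (Node-style Lambda handler in TypeScript) of returning one of the documented v2 error payloads. Whatever Alexa ends up saying, the error payload generally needs to go back as the handler's result rather than as a Lambda error, since failing the invocation outright is one way to end up with the generic "doesn't work on that device" response. Double-check the namespace and error names against the current SHS reference:

import { randomUUID } from 'crypto';

// Sketch of a Smart Home Skill (v2 payload) handler reporting a device as offline.
// The shape mirrors the error objects in the SHS API reference; verify the exact
// namespace/name values against the docs for your payload version.
export const handler = (
  event: unknown,
  context: unknown,
  callback: (error: Error | null, result?: unknown) => void,
): void => {
  const errorResponse = {
    header: {
      namespace: 'Alexa.ConnectedHome.Control',
      name: 'TargetOfflineError', // one of the documented error names
      payloadVersion: '2',
      messageId: randomUUID(),
    },
    payload: {},
  };

  // Return the error object as the result; callback(err) fails the whole invocation.
  callback(null, errorResponse);
};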

Automated system testing for chromecast receiver application

I am wondering if there is a good way to do automated system testing of a Chromecast receiver application?
If you open the application URL in a Chrome browser, the cast_receiver library cannot find the websocket connection on:
ws://localhost:8008/v2/ipc
Since this handles the communication between the app and the Chromecast hardware, I am thinking of something like a Node.js websocket server that can talk to the Chromecast receiver app. Is there such a system, or does anyone know if Google has plans to release something for this kind of testing?
Also, would there be other problems related to differences between the Chromecast browser and the Chrome browser? As I understand it, the Chromecast browser is just a subset of Chrome, which makes me think it should work.
No, there is no easy way to do this.
DISCLAIMER: I haven't tried any of what I'm about to suggest. It's also probably a terrible idea, as Google could change the protocol at any time and in any fashion they desire, since it isn't a public thing.
BIG DISCLAIMER: You may be in violation of the ToS by doing this as Section 3.2 (Developer Policies) states that you "may not ... develop a standalone technology ... any functionality of any Google Cast Receiver". Possibly, you'd be making a standalone piece of technology that replicated the IPC functionality. But I don't know. I'm not a lawyer.
If you want to go and do this, I'd suggest making a copy of the Google Cast Receiver SDK (www.gstatic.com/cast/sdk/libs/receiver/2.0.0/cast_receiver.js as of April 28, 2015) and altering it so that it logs out the messages that are being sent and received.
Luckily, it appears that we have logging messages to help us find the relevant code.
The receiving method has the string "Received message". I would guess that "a.message" is what is being received.
The sending method has the string "IPC message sent". I would guess that "a" is what is being sent.
Once you've instrumented your copy of the code, you need to publish it somewhere that your receiver app can see it and then you need to edit your receiver app to point to your new and improved SDK. Please please please make sure that you do this on a non-published app for testing purposes only.
Once that is done, you need to find some way to get your messages out of the code and into something that you can access. You have a few options.
1. Fiddle around with the code more and figure out how to get the Chromecast to log out the data you want;
2. Store the information in an array and read it using the debugger; or
3. Open your own socket (or websocket) and send that data to a server that you control.
From here, you can run your app, interact with it, and then have a complete record of the IPC messages that were sent and received. Armed with this, you can create your own Fake-IPC server that listens for specific messages and spits out the stuff that is in your log.
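If you go the route of option 3 and want to replay what you captured, a minimal sketch of such a Fake-IPC endpoint could look like the following (Node/TypeScript with the ws package; the message shapes are placeholders, substitute whatever your instrumented SDK actually logged):

import { WebSocketServer } from 'ws';

// Stand-in for ws://localhost:8008/v2/ipc so the receiver app can connect outside a Chromecast.
const wss = new WebSocketServer({ port: 8008, path: '/v2/ipc' });

wss.on('connection', (socket) => {
  console.log('receiver app connected');

  socket.on('message', (raw) => {
    console.log('IPC message from receiver:', raw.toString());

    // Replay a canned reply captured from a real session (purely illustrative).
    socket.send(JSON.stringify({ type: 'ok' }));
  });

  socket.on('close', () => console.log('receiver app disconnected'));
});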

Programmatically read Adobe game variables / interact with the game / game bot

Basically, I'm trying to read a game's chat and catch actions from the user.
Here is the image I will use to explain the situation:
1: I took a message from the chat
2: I tried to find it in the game's memory with Cheat Engine
3: By examining every address where it was found, I ended up at this one, which contains the chat formatted with what seems to be HTML.
That part is only the bottom part of the chat. (I see the rest of it if I scroll up)
So, I asked myself how I could read game variables to interact with the game.
Another thing I'm trying to achieve is to catch the user's actions so I can display some information in a WinForms window.
I've just read about packet sniffing; it seems interesting for what I'm trying to do.
I tried to read packets going in and out of this app with Wireshark. Every action in game sent a few packets, but I couldn't read them as they were just a bunch of weird characters. I tried to decrypt them using a few methods I found on Wireshark's forum, without success. I was also asking myself: even if I could see them in Wireshark, how am I going to do that programmatically?
There is certainly a good way to do this, as we often see bots in this game.
Considering the number of bots playing "in team", I'm pretty sure they do not use clicks but rather run something in the background that sends requests.
How do you make such a bot that fights, talks, and interacts with players automatically?
This game is Dofus, powered by Adobe AIR.
I usually program in C++ and C#, but I was wondering what's the best way to do this.
I need a kick in the right direction!
Maybe try a TCP/IP listener control (or the TcpListener class in C#) in your C# project with the appropriate port to catch requests (and responses). The information sent could be compressed, so you may want to try some standard algorithms.
Did you try reverse engineering the AIR app?
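If you do go the traffic-sniffing route, one low-effort starting point is a logging TCP proxy sitting between the client and the game server (sketched here in Node/TypeScript rather than the C# TcpListener mentioned above; the host/port values are placeholders, and the payloads may still be compressed or encrypted):

import * as net from 'net';

const LOCAL_PORT = 5555;               // point the client (or a hosts-file entry) at this
const GAME_HOST = 'game.example.com';  // placeholder
const GAME_PORT = 443;                 // placeholder

net.createServer((client) => {
  const server = net.connect(GAME_PORT, GAME_HOST);

  client.on('data', (chunk) => {
    console.log('client -> server', chunk.length, 'bytes', chunk.toString('hex').slice(0, 64));
    server.write(chunk);
  });
  server.on('data', (chunk) => {
    console.log('server -> client', chunk.length, 'bytes', chunk.toString('hex').slice(0, 64));
    client.write(chunk);
  });

  client.on('close', () => server.end());
  server.on('close', () => client.end());
  client.on('error', () => server.destroy());
  server.on('error', () => client.destroy());
}).listen(LOCAL_PORT);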

Pattern for long running tasks invoked through ASP.NET

I need to invoke a long-running task from an ASP.NET page and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution?
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service or Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that explicitly covers how to integrate WF (Windows Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement: we needed to know immediately if a message was not received, so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one we went live with, does exactly what you asked about, using a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store-level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener).
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web-app could simply poll the database at arbitrary intervals, and update a progress-bar or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
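To make that concrete, here's a rough sketch of the polling half of that pattern (TypeScript, browser side), assuming a hypothetical /api/import-status/{jobId} endpoint that simply returns the status row the import process writes:

interface ImportStatus {
  filesProcessed: number;
  totalFiles: number;
  errors: string[];
  complete: boolean;
}

// Polls the hypothetical status endpoint and hands each snapshot to the UI callback.
async function pollImportStatus(jobId: string, onUpdate: (s: ImportStatus) => void): Promise<void> {
  for (;;) {
    const res = await fetch(`/api/import-status/${jobId}`);    // hypothetical endpoint
    const status: ImportStatus = await res.json();
    onUpdate(status);                                           // update progress bar / error list
    if (status.complete) return;
    await new Promise((resolve) => setTimeout(resolve, 2000));  // poll every couple of seconds
  }
}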
