Voiceflow issue for Google Action/Voice - crashing after " 'suggestions' will be ignored since they are used in 'final_response' "

I'm using Voiceflow to create a simple music streaming app that plays MP3s and lets the user stop/start/skip/replay tracks using the default methods.
It works fine with Alexa, but when I test it in Google Action/Voice I get the following error right after it starts playing the first file (which starts automatically per my program):
UnnecessaryField final_response.rich_response: 'suggestions' will be ignored since they are used in 'final_response'.
And the following is the debug info from Google Action:
{
"response": "Alright. Getting the test version of my test app.\nHello, welcome to My music. I'll now play my favorite My music for you. You can skip ahead to the next track by saying, Hey Google, next. Or go back to the previous one by saying, Hey Google, previous. You can also pause the music by saying, Hey Google, pause. If you want to stop the music, say Hey Google, stop.Sorry, something went wrong. When you're ready, give it another try.",
"expectUserResponse": false,
"conversationToken": "Evwo5dm...",
"audioResponse": "//NExG...",
"ssmlMarkList": [],
"debugInfo": {
"assistantToAgentDebug": {
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=b1eb03ce32b1' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6ImVlNGRiZDA2YzA2NjgzkZGNhNmI4OGMzZTQ3M2I2OTE1YjkiLCJ0eXAiOiJKV1QifQ.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJhdWQiOiJiYXJvcXVlbXVzaWMtNDQ3OWQiLCJuYmYiOjE1NzA2NTE2OTQsImlhdCI6MTU3MDY1MTk5NCwiZXhwIjoxNTcwNjUyMTE0LCJqdGkiOiI4DK0Q4nY_KgaP57R_U0BLbsXzsraHwEiwzrJOTtu-VBlypZ0ujPph6WWGzlmfRhek09QjaCTVAR9gQOzrfkvnMwrmLH9CskopziyVRK3Yj99IArZp5bht0uvma78p-mxYwVfmGpDLt2A2nVpx3P-XMaswq-b3WA-s3-Y3sNpZO5MPkSmATXWxFdoA9xnKpWrGSbJ3TksylVygrhFPzw_gQ2bQ' -A Google-ActionsOnGoogle/1.0 -X POST -d '{\"user\":{\"locale\":\"en-US\",\"lastSeen\":\"2019-10-09T20:13:04Z\",\"userStorage\":\"{\\\"data\\\":{\\\"userId\\\":\\\"2cd086-9896-4ddf-a59c-a2d9a3229\\\"}}\",\"userVerificationStatus\":\"VERIFIED\"},\"conversation\":{\"conversationId\":\"ABwppHFmu4qLfj_d0VPLm1k4MJOFKTmjb8NQJ6iH5TOJBilnPK5Gf1z-hvlIMqB8XVpP\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"Talk to my test app\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.ACCOUNT_LINKING\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}'",
"assistantToAgentJson": "{\"user\":{\"locale\":\"en-US\",\"lastSeen\":\"2019-10-09T20:13:04Z\",\"userStorage\":\"{\\\"data\\\":{\\\"userId\\\":\\\"2c60d086-6-4ddf-a59c-a21c3229\\\"}}\",\"userVerificationStatus\":\"VERIFIED\"},\"conversation\":{\"conversationId\":\"ABwppHxEQQ7pSIgVULfj_d0VPLm1kTmjb8NQJ6iHHDWaxHI5TOJBilnlIMqB8XVpP\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"VOICE\",\"query\":\"Talk to my test app\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.ACCOUNT_LINKING\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]},\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}",
"delegatedRequest": {
"delegatedRequest": ""
}
},
"agentToAssistantDebug": {
"agentToAssistantJson": "{\n \"conversationToken\": \"[]\",\n \"finalResponse\": {\n \"richResponse\": {\n \"items\": [{\n \"simpleResponse\": {\n \"textToSpeech\": \"\\u003cspeak\\u003eHello, welcome to My music. I\\u0027ll now play my favorite My music for you. You can skip ahead to the next track by saying, Hey Google, next. Or go back to the previous one by saying, Hey Google, previous. You can also pause the music by saying, Hey Google, pause. If you want to stop the music, say Hey Google, stop.\\u003c/speak\\u003e\"\n }\n }, {\n \"mediaResponse\": {\n \"mediaType\": \"AUDIO\",\n \"mediaObjects\": [{\n \"name\": \"Albinoni - Adagio in G minor\",\n \"contentUrl\": \"https://s3.amazonaws.com/com.getstoryflow.audio.production/file.mp3\"\n }]\n }\n }],\n \"suggestions\": [{\n \"title\": \"exit\"\n }]\n }\n },\n \"responseMetadata\": {\n \"status\": {\n \"message\": \"Success (200)\"\n },\n \"queryMatchInfo\": {\n \"queryMatched\": true,\n \"intent\": \"4eaa-eae8-438a-880a-cc9e27a936fc\"\n }\n }\n}",
"delegatedResponse": {
"delegatedResponse": ""
}
},
"sharedDebugInfoList": [
{
"name": "ResponseValidation",
"debugInfo": "",
"subDebugEntryList": [
{
"name": "UnnecessaryField",
"debugInfo": "final_response.rich_response: 'suggestions' will be ignored since they are used in 'final_response'.",
"subDebugEntryList": []
}
]
}
],
"conversationBuilderExecutionEventsList": []
},
"visualResponse": {
"visualElementsList": [
{
"simulatorMediaResponse": {
"mediaResponse": {
"mediaType": 1,
"mediaObjectsList": [
{
"name": "File",
"description": "",
"contentUrl": "https://s3.amazonaws.com/com.getstoryflow.audio.production/file.mp3"
}
],
"startOffsetMs": 0
},
"mediaSessionId": "324532373494015"
}
},
{
"displayText": {
"content": "Alright. Getting the test version of my test app."
}
},
{
"displayText": {
"content": "Hello, welcome to My music. I'll now play my favorite My music for you. You can skip ahead to the next track by saying, Hey Google, next. Or go back to the previous one by saying, Hey Google, previous. You can also pause the music by saying, Hey Google, pause. If you want to stop the music, say Hey Google, stop."
}
},
{
"displayText": {
"content": "Sorry, something went wrong. When you're ready, give it another try."
}
}
],
"suggestionsList": [],
"agentLogoUrl": "https://www.gstatic.com/voice/opa/partner_icons/generic_3p_avatar.png",
"agentStyle": {
"primaryColor": "",
"fontFamily": "",
"borderRadius": 0,
"backgroundColor": "",
"backgroundImageUrl": ""
}
},
"clientError": 0,
"is3pResponse": true,
"clientOperationList": [
{
"operationType": 4,
"startIndicatorPayLoad": {
"status": 1
}
},
{
"operationType": 7,
"exitIndicatorPayLoad": {
"status": 1
}
}
],
"projectName": "",
"renderedHtml": ""
}
I should be able to interact with the app by pausing, stopping, skipping, or going back to the previous track.
Instead, this error occurs and the app crashes.
Obviously I have no direct access to the code, as Voiceflow manages that as a black box.
But does anyone know how I can fix it?
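A note on what is probably happening (my reading of the debug output, since Voiceflow owns the generated code): the response is emitted as finalResponse, i.e. with expectUserResponse: false, which ends the conversation immediately. That is why the suggestion chip is ignored and why pause/next/previous can never reach the action. For reference, here is a minimal sketch of what a non-final Actions on Google (v2) webhook payload with a media response would look like; the title and URL are placeholders, and this is not Voiceflow's actual output:

```javascript
// Sketch of a NON-final Actions on Google (v2) media response.
// Because expectUserResponse is true, the session stays open, the
// suggestion chip is rendered, and media-control intents
// (pause/next/previous) can reach the action.
// The title and contentUrl below are placeholders.
function buildMediaResponse() {
  return {
    payload: {
      google: {
        expectUserResponse: true, // the key difference vs. finalResponse
        richResponse: {
          items: [
            { simpleResponse: { textToSpeech: "<speak>Now playing.</speak>" } },
            {
              mediaResponse: {
                mediaType: "AUDIO",
                mediaObjects: [{
                  name: "Albinoni - Adagio in G minor",
                  contentUrl: "https://example.com/file.mp3"
                }]
              }
            }
          ],
          // Suggestions are only honored on non-final responses.
          suggestions: [{ title: "exit" }]
        }
      }
    }
  };
}
```

Inside Voiceflow itself, the practical angle (speculation on my part) would be to make sure the stream step is not the last block in the flow, so the platform does not emit the response as final.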

Related

Suddenly getting intermittent error "Failed to parse Dialogflow response into AppResponse : null"

I've suddenly started seeing errors like "Failed to parse Dialogflow response into AppResponse : null" on Actions on Google/Dialogflow projects that previously worked fine.
Even now the error is intermittent: sometimes I get the above error and other times it works, even though the response from my Firebase function (fulfilment webhook) is identical in both cases.
As an example, here is a response that sometimes, but not always, causes the error I mention above:
{
  "status": 200,
  "headers": { "content-type": "application/json;charset=utf-8" },
  "body": {
    "payload": {
      "google": {
        "expectUserResponse": true,
        "systemIntent": {
          "intent": "actions.intent.OPTION",
          "data": {
            "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
            "listSelect": {
              "title": "Please select one option:",
              "items": [
                {
                  "optionInfo": {
                    "key": "Yes",
                    "synonyms": [ [ "Go", "Lets go", "Let's go", "Get started", "Yes", "I am ready", "Start the survey", "Start", "1", "One" ] ]
                  },
                  "description": " Start a new diary entry",
                  "title": "1. Let's Go"
                },
                {
                  "optionInfo": {
                    "key": "Stop",
                    "synonyms": [ [ "No", "Don't continue", "No thanks", "Stop", "Stop the survey", "2", "Two" ] ]
                  },
                  "description": " Don't make a new diary entry",
                  "title": "1. Stop"
                }
              ]
            }
          }
        },
        "richResponse": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Hi. It’s nice to have you here and we look forward to discussing the food and drink you give your baby. Let us know each time you give them something to eat or drink. When you are ready to start, say Let’s go.",
                "displayText": "Hi. \n \nLet us know each time you give them something to eat or drink. \n \nWhen you are ready to start, select Let’s go."
              }
            }
          ]
        },
        "userStorage": "{\"data\":{\"userId\":\"bb46f3f9-e522-2da0-7b3c-302a615d28e4\",\"unicomId\":\"danone2\"}}"
      }
    }
  }
}
So in the Firebase logs I can see the above being returned in all cases, but sometimes Google Assistant fails, and the Google Cloud logs show the "Failed to parse Dialogflow response into AppResponse : null" error, and other times, with the identical JSON returned by Firebase, it happily works.
I'm at a slight loss as to where to look further, if anyone has any pointers that'd be much appreciated, thanks!
I had the same problem. In my case, systemIntent in my response JSON was null, and that caused this error. It worked fine before but recently started failing. I hope this helps someone.
Just to close this issue: it turns out this was my fault, and the JSON response was constructed wrongly (the "synonyms" were enclosed in duplicate [[ and ]]). I believe Dialogflow suddenly started flagging this as an error, where previously it was more forgiving.
Anyway, fixing the JSON response has fixed the issue.
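In other words, each "synonyms" value must be a flat array of strings, not an array of arrays. A minimal sketch of the corrected option item, using the same keys as the broken response above:

```javascript
// Corrected listSelect item: "synonyms" is a flat string array,
// not the doubly-nested [[ ... ]] that Dialogflow started rejecting.
const yesOption = {
  optionInfo: {
    key: "Yes",
    synonyms: ["Go", "Lets go", "Let's go", "Get started",
               "Yes", "I am ready", "Start the survey", "Start", "1", "One"]
  },
  description: "Start a new diary entry",
  title: "1. Let's Go"
};
```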

PayPal Payouts (Sandbox) response MALFORMED_REQUEST_ERROR

I'm using PayPal Payouts in my project and hit an issue when creating a payout. The request is made from Postman (Windows) and from Node.js using axios (Firebase Functions).
I receive a MALFORMED_REQUEST_ERROR response for this request:
POST https://api.sandbox.paypal.com/v1/payments/payouts
{
  "sender_payout_header": {
    "sender_batch_id": "2018083001",
    "email_subject": "You have a payout!"
  },
  "items": [
    {
      "recipient_type": "EMAIL",
      "amount": {
        "value": "9.87",
        "currency": "USD"
      },
      "note": "Thanks for your patronage!",
      "sender_item_id": "2018083001001",
      "receiver": "receiver@example.com"
    }
  ]
}
When I tried changing items[0].note to the test value POSPYO001, the API responded 201 Created as expected.
Why does the sandbox only work with positive/negative test values? Is this a limitation, or a bug?
P.S. Sorry for my English.
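One thing worth double-checking (an assumption on my part, since the question doesn't confirm the cause): the Payouts API documents the batch header field as sender_batch_header, not sender_payout_header, and an unrecognized field name is a common trigger for MALFORMED_REQUEST_ERROR. A hedged sketch of the request body with the documented field name; the token and the commented-out axios call are placeholders:

```javascript
// Hypothetical payout body using the documented field name
// "sender_batch_header" (the failing request used "sender_payout_header").
// The access token and the POST call below are placeholders.
function buildPayoutBody() {
  return {
    sender_batch_header: {
      sender_batch_id: "2018083001",
      email_subject: "You have a payout!"
    },
    items: [{
      recipient_type: "EMAIL",
      amount: { value: "9.87", currency: "USD" },
      note: "Thanks for your patronage!",
      sender_item_id: "2018083001001",
      receiver: "receiver@example.com"
    }]
  };
}

// Usage (not executed here; ACCESS_TOKEN is a placeholder):
// axios.post('https://api.sandbox.paypal.com/v1/payments/payouts',
//   buildPayoutBody(),
//   { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } });
```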

Wrong intent in Alexa Skill Request when using the simulator

I set up my intents using this intent schema:
{
  "intents": [
    {
      "intent": "StartIntend"
    },
    {
      "intent": "AMAZON.YesIntent"
    },
    {
      "intent": "AMAZON.NoIntent"
    }
  ]
}
My sample utterances look like this (it's German):
StartIntend Hallo
StartIntend Moin
StartIntend Guten Tag
Why does the Amazon Developer Console generate the following request, when I use the utterance "Yes" or "Ja"?
{
  "session": {
    "sessionId": "SessionId...",
    "application": {
      "applicationId": "amzn1.ask.skill...."
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account...."
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId...",
    "locale": "de-DE",
    "timestamp": "2017-02-17T21:07:59Z",
    "intent": {
      "name": "StartIntend",
      "slots": {}
    }
  },
  "version": "1.0"
}
Whatever I enter, it always uses the intent StartIntend.
Why is that? What have I forgotten / what have I done wrong?
The schema and utterance look correct.
I tried duplicating what you are seeing by performing the following steps:
Copied them as-is into a new skill on my account
Selected the North America region on the Configuration page.
Set the Lambda to point to an existing Lambda that I have. For testing purposes, I just need a valid ARN; I'm going to ignore the response anyway.
Then entered "Yes" into the service simulator
It indeed sent the Lambda the AMAZON.YesIntent.
So I conclude that there's nothing wrong with the data you posted.
I tried entering Ja which resulted in the StartIntend, but I guess I would expect that since Ja is not "Yes" in North America.
Have you set the region to Europe, and entered a Lambda for the Europe region?
I talked about it with Amazon Support. After some experiments, it turned out that you have to write "ja" in lowercase. It seems to be a bug in the simulator itself.
When creating the skill in the Alexa Skills Kit, you need to choose the correct language i.e. German, see screenshot below.
Everything else seems to be correct.

Bing Spell Check API - ignoring Gibberish

It seems like the Bing Spell Check API does not work as I expected.
A lot of mistakes are ignored...
For example:
"lets go to the see and then to gfgdf." response: "flaggedTokens": []
"lets blhblh to the sea" response: "flaggedTokens": []
Whereas "lets go to the see" returns:
{
  "flaggedTokens": [
    {
      ...
      "suggestions": [
        {
          "suggestion": "let's"
        }
      ]
    },
    {
      ...
      "suggestions": [
        {
          "suggestion": "sea"
        }
      ]
    }
  ],
  "_type": "SpellCheck"
}
Can I do something to get more reliable results?
Thanks
It's worse on my side...
I always get the same result:
{
  "_type": "SpellCheck",
  "flaggedTokens": []
}
If somebody happens to know how to prevent that, I would like to know.
I guess it's maybe related to the migration to Azure. Maybe those endpoints are not functioning correctly now.
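If the service is reachable and simply too lenient, one knob worth trying (to the best of my knowledge of the v7 API, so treat this as an assumption) is the mode parameter: "proof" enables more aggressive checking than the default "spell", though if I remember the docs correctly it is only supported for a few markets such as en-US. A sketch of the request parameters; the subscription key is a placeholder and the actual POST is not shown:

```javascript
// Build the pieces of a Bing Spell Check v7 request.
// mode=proof requests more aggressive checking than the default "spell".
// "SUBSCRIPTION_KEY" is a placeholder; sending the request is left out.
function buildSpellCheckRequest(text) {
  return {
    url: "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck",
    params: { mkt: "en-US", mode: "proof" },
    headers: {
      "Ocp-Apim-Subscription-Key": "SUBSCRIPTION_KEY",
      "Content-Type": "application/x-www-form-urlencoded"
    },
    body: "text=" + encodeURIComponent(text)
  };
}
```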

Watson conversation API returns empty output for yes and no input

I've set up a very simple script, and part of it requires a yes or no response from the user.
When I test the script through the script builder at ibmwatsonconversation.com, it works fine.
But when I test it through Postman, making HTTP POST requests, once I get to the part that requires a yes or no answer the output node is always:
"output": {
  "log_messages": [],
  "text": [],
  "nodes_visited": [
    "node_25_1480610375023"
  ]
},
The previous two nodes in the conversation work fine.
I have set up intents for yes and no, see images below:
The dialog is as follows:
Here's the chain of requests / responses:
{"input": {"text": "hello"}}
"output": {
  "log_messages": [],
  "text": ["Welcome to the KMBC IT help desk. How can I help you?"],
  "nodes_visited": ["node_1_1480509441272"]
},
then
{"input": {"text": "my laptop is broken"}}
"output": {
  "log_messages": [],
  "text": [
    "I'm sorry to hear that your laptop isn't working. \n\nI need you to check a couple of things for me, is that ok?"
  ],
  "nodes_visited": [
    "node_3_1480509642420",
    "node_19_1480518011225"
  ]
},
finally
{"input": {"text": "yes"}}
"output": {
  "log_messages": [],
  "text": [],
  "nodes_visited": [
    "node_25_1480610375023"
  ]
},
Works fine inside the "Try it out" panel within the workspace:
Full JSON request / response:
{"input": {"text": "hello"}}
{
  "intents": [{"intent": "greetings", "confidence": 1}],
  "entities": [],
  "input": {"text": "hello"},
  "output": {
    "log_messages": [],
    "text": ["Welcome to the KMBC IT help desk. How can I help you?"],
    "nodes_visited": ["node_1_1480509441272"]
  },
  "context": {
    "conversation_id": "4b5b1858-ae4e-4907-a3ab-c49abf601fd3",
    "system": {
      "dialog_stack": [
        {"dialog_node": "root"}
      ],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  }
}
{"input": {"text": "laptop broken"}}
{
  "intents": [{"intent": "complaint", "confidence": 0.989692384334575}],
  "entities": [
    {
      "entity": "hardware",
      "location": [0, 6],
      "value": "laptop"
    }
  ],
  "input": {"text": "laptop broken"},
  "output": {
    "log_messages": [],
    "text": ["I'm sorry to hear that your laptop isn't working. \n\nI need you to check a couple of things for me, is that ok?"],
    "nodes_visited": ["node_3_1480509642420", "node_19_1480518011225"]
  },
  "context": {
    "conversation_id": "b53dff12-9252-4b7e-abe8-7b45f561d394",
    "system": {
      "dialog_stack": [{"dialog_node": "node_19_1480518011225"}],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  }
}
{"input": {"text": "yes"}}
{
  "intents": [{"intent": "yes", "confidence": 1}],
  "entities": [],
  "input": {"text": "yes"},
  "output": {
    "log_messages": [],
    "text": [],
    "nodes_visited": ["node_25_1480610375023"]
  },
  "context": {
    "conversation_id": "b9ddc5b0-5f3c-423f-9bbe-5a1ef013c175",
    "system": {
      "dialog_stack": [{"dialog_node": "root"}],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  }
}
Based on the full JSON request/response, your issue is here (actually it's also in the previous call, but that one works by coincidence):
{"input": {"text": "yes"}}
Conversation is stateless, so you need to send back the context object you received previously; otherwise the system doesn't know where to continue. The request object should look like this:
{
  "input": {"text": "yes"},
  "context": {
    "conversation_id": "b53dff12-9252-4b7e-abe8-7b45f561d394",
    "system": {
      "dialog_stack": [{"dialog_node": "node_19_1480518011225"}],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  }
}
I would recommend using the Watson Developer Cloud SDK to manage this for you.
https://github.com/watson-developer-cloud
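The round-tripping can be sketched in a few lines of Node; the transport call (callConversation) is a placeholder, and the point is only that the context from each response is echoed into the next request:

```javascript
// Carry the Conversation context across turns: each request must echo
// back the "context" object from the previous response, or the service
// restarts the dialog from the root node.
function nextRequest(userText, previousResponse) {
  const req = { input: { text: userText } };
  if (previousResponse && previousResponse.context) {
    req.context = previousResponse.context; // includes dialog_stack etc.
  }
  return req;
}

// Usage (callConversation is a hypothetical HTTP wrapper):
// const r1 = await callConversation(nextRequest("hello"));
// const r2 = await callConversation(nextRequest("yes", r1));
```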
Have you actually created a yes and no intent?
There is a lot of debate about the best way to handle yes and no responses, but I have found that creating yes and no intents with example "yes" and "no" responses works well.
Your example utterances for these intents could include responses like "ok", "yess", "oh no", "yes please", etc.
