Botium partial convos are not actually sending conversations to chatbot - automated-tests

I am trying to follow the partial convo instructions here under splitting convos, but I am unable to get the partial convo to actually send messages to the chatbot. Are there additional configuration settings in botium.json that I need to adjust to enable this feature?
Consider the simple give_me_a_picture.convo.txt that is created by botium-cli init. Say I create a partial convo file example.pconvo.txt that looks like this:
PARTIAL_HELLO
#me
Hello, Bot!
#bot
You said: Hello, Bot!
and then I adjust give_me_a_picture.convo.txt to include the following:
give me picture
INCLUDE PARTIAL_HELLO
#me
give me a picture
#bot
Here is a picture
MEDIA http://www.botium.at/img/logo.png
The above test technically still passes. However, if I run it with --verbose, I can see that it doesn't actually send the messages from PARTIAL_HELLO (i.e. "Hello, Bot!") -- it just skips straight to saying "give me a picture". What adjustments do I have to make so that it actually goes through the partial conversation?
Here is the --verbose output at the start of the convo, where you can see that the first message sent is "give me a picture":
botium-PluginConnectorContainer Botium plugin botium-connector-echo loaded +0ms
botium-connector-echo Validate called +0ms
botium-connector-echo Build called +1ms
botium-connector-echo Start called +0ms
botium-cli-run running testcase give me picture +21ms
botium-Convo give me picture/Line 5: user says {
botium-Convo "sender": "me",
botium-Convo "channel": null,
botium-Convo "messageText": "give me a picture",
botium-Convo "stepTag": "Line 5",
botium-Convo "not": false,
botium-Convo "asserters": [],
botium-Convo "logicHooks": [],
botium-Convo "userInputs": []
botium-Convo } +0ms
I can also confirm that Botium did find the partial convo and transcribed it successfully:
botium-ScriptingProvider undefined PARTIAL_HELLO ({ convoDir: 'sample/', filename: 'example.pconvo.txt' }): Line 3: #me - Hello, Bot! | Line 6: #bot - You said: Hello, Bot! +0ms

You are using the INCLUDE instruction in the header of the convo file, which is the wrong place - you have to use it within the actual conversation. To use the partial convo at the beginning of the convo, add it in the #begin section:
give me picture
#begin
INCLUDE PARTIAL_HELLO
#me
give me a picture
#bot
Here is a picture
MEDIA http://www.botium.at/img/logo.png
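With the INCLUDE moved into the #begin section, the partial convo's steps become part of the conversation itself, so the --verbose log should now show "Hello, Bot!" being sent (and the bot reply asserted) before "give me a picture".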

Related

Custom FCM notification sound for unity android

In my Unity project I want to play a custom sound when I get a Firebase Cloud Message, not the system default sound. After following other answers, my message looks like this:
{
"to": "some_key",
"notification": {
"title": "Title",
"android_channel_id": "2",
"body": "Body",
"sound": "custom_sound.wav"
}
}
and I placed custom_sound.wav in Assets/Plugins/Android/res/raw. When I unzip my .apk, I can see that my sound file is in the right location.
But it keeps playing the system default sound, even after I remove the sound field. Is there anything else I should check?
First: a quick tip when debugging. If you select "Export Project", you can open the generated Gradle project with Android Studio:
Occasionally you have to update the Gradle wrapper, but it helps a ton to debug things like "is my sound file in res/raw?" without having to decompress your APK and poke around.
I think the issue you're running into is that sounds are now associated with NotificationChannels (as of Android O) rather than with individual notifications, as noted by this StackOverflow post describing a similar issue. Since this isn't exposed via the Unity SDK, you can fortunately add a channel yourself with Unity.Notifications.Android.
It should be as simple as creating a new
public AndroidNotificationChannel(string id, string title, string description, Importance importance)
with your id set to "2" (to match your sample notification above; since this is a string, I would recommend giving it a better name :D).
Then you can call RegisterNotificationChannel with that channel you create as your parameter.
For example, to get your notification above to work, I believe you can write:
var notificationChannel = new AndroidNotificationChannel("2", "Channel 2 (working title)", "This is the 2nd channel", Importance.Default);
AndroidNotificationCenter.RegisterNotificationChannel(notificationChannel);
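One caveat worth knowing (an Android platform detail, not specific to Unity): once a notification channel exists, Android treats its settings, including the sound, as immutable. So if you are iterating on this, you may need to uninstall the app or register a channel with a new id to hear the change.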
Let me know if this helps!
--Patrick

Custom sound push notification does not work (Flutter)

{
"to": "XXXX",
"notification": {
"title": "ASAP Alert",
"body": "Please open your app"
},
"data": {
"screen": "/Nexpage1",
"sound": "alarm",
"click_action": "FLUTTER_NOTIFICATION_CLICK"
}
}
Above is my payload for the push notification. I have put the alarm.mp3 file inside the raw folder, however it still does not give me the alarm sound. I have also tried specifying alarm.mp3. Is there anything wrong with the JSON, or is it because of the code in my dart file?
Reading this, it seems that it should be managed automatically on Android (if you didn't use a notification builder), but you have to specify the .mp3 extension too, and put it inside the notification field, not the data one:
"sound": "alarm.mp3"
iOS behaves very differently under the hood, but you can use a custom sound by setting the sound field in the notification payload there too. However, .mp3 is not a valid APNs notification sound format, and you also need to specify the file extension:
"sound": "filename.caf"
Follow the Apple documentation in order to prepare the custom sound file for your app (mp3 is not a valid format):
Preparing Custom Alert Sounds
Local and remote notifications can specify custom alert sounds to be
played when the notification is delivered. You can package the audio
data in an aiff, wav, or caf file. Because they are played by the
system-sound facility, custom sounds must be in one of the following
audio data formats:
Linear PCM
MA4 (IMA/ADPCM)
µLaw
aLaw
Place custom sound files in your app bundle or in the
Library/Sounds folder of your app’s container directory. Custom
sounds must be under 30 seconds when played. If a custom sound is
over that limit, the default system sound is played instead.
You can use the afconvert tool to convert sounds. For example, to
convert the 16-bit linear PCM system sound Submarine.aiff to IMA4
audio in a CAF file, use the following command in the Terminal app:
afconvert /System/Library/Sounds/Submarine.aiff ~/Desktop/sub.caf -d ima4 -f caff -v
For example, to convert your mp3 file into a caf file, you could type in the terminal:
afconvert -f caff -d LEI16 alarm.mp3 alarm.caf
Read this doc to get a deep insight into all the generic and specific notification payload fields.
UPDATE
I've tested the Android part, and I can confirm that with the .mp3 file in the res/raw/ folder the sound is played as documented and expected.
That's my notification payload:
{
"to" : "my_device_token",
"collapse_key" : "type_a",
"priority" : "high",
"notification" : {
"body" : "Test Notification body for custom sound {{datestamp}}",
"title": "Custom sound alert.mp3",
"sound": "alert.mp3"
}
}
I've also tested the iOS version after converting the .mp3 file to a .caf file this way:
afconvert -f caff -d LEI16 alert.mp3 alert.caf
The same JSON payload, with the different filename, works:
{
"to" : "my_device_token",
"collapse_key" : "type_a",
"priority" : "high",
"notification" : {
"body" : "Test Notification body for custom sound {{datestamp}}",
"title": "Custom sound alert.mp3",
"sound": "alert.caf"
}
}
Remember to add the file to your main bundle.
That works if the app is terminated or in the background.
If you want to show an alert and play a sound when the app is in the foreground, you have to handle it in the onMessage event, as someone has already told you here, or you can use a platform channel here to build your own notification with a Notification.Builder on Android and UNUserNotificationCenter on iOS (for example).
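As a minimal sketch of that foreground handling (assuming firebase_messaging v8+ and flutter_local_notifications v9+; the channel id my_channel_id and the sound name alarm are placeholders you would replace with your own):
import 'package:firebase_messaging/firebase_messaging.dart';
import 'package:flutter_local_notifications/flutter_local_notifications.dart';

final FlutterLocalNotificationsPlugin localNotifications =
    FlutterLocalNotificationsPlugin();

void listenForForegroundMessages() {
  // onMessage only fires while the app is in the foreground.
  FirebaseMessaging.onMessage.listen((RemoteMessage message) {
    final notification = message.notification;
    if (notification == null) return;
    // Re-emit the push as a local notification so the custom sound plays.
    localNotifications.show(
      notification.hashCode,
      notification.title,
      notification.body,
      NotificationDetails(
        android: AndroidNotificationDetails(
          'my_channel_id', // placeholder: must match a registered channel
          'My channel',
          sound: RawResourceAndroidNotificationSound('alarm'), // res/raw/alarm.mp3
          playSound: true,
        ),
      ),
    );
  });
}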
UPDATE
This issue has been solved. See the official comment here:
Hey all 👋
As part of our roadmap (#2582) we've just shipped a complete rework of
the firebase_messaging plugin that aims to solve this and many other
issues.
If you can, please try out the dev release (see the migration guide
for upgrading and for changes) and if you have any feedback then join
in the discussion here.
Given the scope of the rework I'm going to go ahead and close this
issue in favor of trying out the latest plugin.
Thanks everyone 🤓
ShadowSheep did a good job of answering this question, but there's one thing I want to clarify about getting the iOS sounds to work.
You have to add the sound in Xcode (which is where ShadowSheep speaks of including the asset inside the main bundle). You can just drag and drop the audio file (in .caf or another supported format mentioned above) into the root directory (usually called Runner for Flutter) in Xcode:
If you have done this and followed the setup described in the question/answer above, you should be in business.
For me, I am using flutter_local_notifications to create the notification channel.
Include this function (it may be called to create multiple notification channels):
Future<void> _createNotificationChannel(String id, String name,
    String description, String sound) async {
  final flutterLocalNotificationsPlugin = FlutterLocalNotificationsPlugin();
  var androidNotificationChannel = AndroidNotificationChannel(
    id,
    name,
    description,
    sound: RawResourceAndroidNotificationSound(sound),
    playSound: true,
  );
  await flutterLocalNotificationsPlugin
      .resolvePlatformSpecificImplementation<
          AndroidFlutterLocalNotificationsPlugin>()
      ?.createNotificationChannel(androidNotificationChannel);
}
Call the function in initState (this creates 2 notification channels):
_createNotificationChannel("channel_id_1", "channel_name", "description", "alert");
_createNotificationChannel("channel_id_2", "channel_name", "description", "alarm");
Remember to save the alert and alarm sound files in res/raw in .mp3 format.
With this payload:
{
  "notification": {
    "title": "My First Notification",
    "body": "Hello, I'm push notification"
  },
  "data": {
    "title": "My First Notification"
  },
  "android": {
    "notification": {
      "channel_id": "channel_id_1"
    }
  },
  "to": "device_token"
}
For the people trying to make a custom sound work for both Android and iOS, there is a good guide here:
For iOS:
To add a custom sound for iOS, add the audio clip for iOS to your
project's App_Resources/iOS/ directory (iOS only accepts .wav, .aiff,
and .caf extensions).
For Android:
First, add the audio clip to your project's
App_Resources/Android/src/main/res/raw/ directory (Android only
accepts .wav, .mp3 and .ogg extensions).
So we can use a .wav file in order to have a custom sound available for both platforms:
"notification": {
"body": "Test notification",
"title": "Test Test Test",
"click_action": "FLUTTER_NOTIFICATION_CLICK",
"sound": "your_custom_sound.wav"
"android_channel_id": "channel_id_youcreated",
},
'to':
"",
},
After trying this solution on Android, I still couldn't get the custom sound to play. You need the .ogg file type to resolve this issue. After changing to the .ogg file type, I got the custom sound working. Thanks.

Failed to keep user on the same prompt when they enter the wrong number?

We'd like to keep the user on the same prompt when they enter the wrong number. We have tried anything_else, "true", and "jump to", but it got messed up. Please take a look at the attached file to reproduce it. Thanks.
Please enter "how much".
If you enter 6, it will lock the prompt (I assigned 1 to 5 to different identifications); this behavior is correct.
Then enter 2, and it gets messed up...
Please import this JSON to reproduce it, thanks:
https://drive.google.com/file/d/0B1YdUMoS4l7ub1BZdUg1c1dQeG8/view?usp=sharing
The better approach is to use the entity pre-defined by IBM, #sys-number, to get numbers from the user input. You can use it in conditions, and you can capture the number in a context variable too; check the JSON example:
{
  "context": {
    "number": "<? #sys-number ?>"
  },
  "output": {
    "text": {
      "values": [
        "Now is $hora. Sector please?"
      ],
      "selection_policy": "sequential"
    }
  }
}
If the user types "two" or "2", the entity recognizes it!
You can also use a regex expression to accept only the numbers you have pre-defined!
How to activate it: Entities -> System Entities -> sys-number = ON:
Note: wait for Watson to finish TRAINING after you activate this entity.
For example, with sys-number added in your node condition:
#sys-number:1
Check the image:
If the user types the correct number:
Check the dialog (with the true condition) for when the user doesn't type the correct number:
I made this example so you can understand what I did:
Download the JSON to see how to do it with REGEX here.
Download the JSON to see how to do it with SYS-NUMBER here.
EDIT:
Referring to your questions:
In this case you can use regex and use the context variable to make conditions in another node; my workspace with regex can help you with numbers. The variable $number can then be used in the next node to verify whether the user typed the number correctly.
For the other case, use the Jump to inside the conversation, and use true if the user doesn't type the number correctly again.
Check my image:
Download the new workspace here.
Study more about conditions here.

Alexa Echo Dot - ASK skill problems

I'm trying to make a simple custom test Alexa Skill, but I'm stuck and I'm not sure what the problem is. Maybe someone more experienced knows what I'm missing?
Invocation Name
home system
Intent Schema
{
"intents": [
{
"intent": "AMAZON.HelpIntent",
"slots": []
},
{
"intent": "TestIntent",
"slots": [
{"name": "test", "type": "AMAZON.NUMBER"}
]
}
]
}
Sample Utterances
TestIntent set state {test}
TestIntent add state
I have written my own little Python server hosted on my own machine; I already have a working news flash skill on the same system. I have spent plenty of time looking at the documentation and reading tutorials, and it looks like I have done what I'm supposed to do.
The result I get is this:
A LaunchRequest works, both in the Service Simulator and on the Echo. It triggers a HTTP POST with the expected JSON, and I get the expected voice reply.
But the IntentRequest only works from the Service Simulator; it never works on the Echo. When I say, for example, "alexa home system set state eight", no requests are made to my server; the Echo just makes a sound and that's all.
I have no idea how to debug this. The skill is a US skill and my Echo is in US mode. I have tried setting the endpoint to both Europe and North America, and tried different trigger words, different slots, no slots ... and I have of course checked under Settings -> History to make sure that the device understood me correctly.
Any idea what to try next? How to debug this?
I found the problem; it was a classic PEBCAK (Problem Exists Between Chair And Keyboard) problem.
I had missed that I had to be much more precise in how I invoke an intent (a single sentence that contains both the trigger word and the intent in one go). Examples of valid and working invocations are:
Alexa, ask home system to set state nine
Alexa, set state twelve using home system
Alexa, tell home system set state one
I realised this when I used the alternative 2-step invocation and noticed that it worked. So it had to be the way I invoked the skill, not the backend:
Alexa, open home system
(Alexa responds, and listens for the command)
Set state to eight
(Intent triggered, Alexa responds)
The first request above is the LaunchRequest.
The LaunchRequest must respond with shouldEndSession: false, otherwise the session will end. That maps to question(...) in my code.
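For illustration, the relevant part of the LaunchRequest response, roughly the JSON that flask-ask's question('Welcome to Foo') produces, looks like this:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to Foo"
    },
    "shouldEndSession": false
  }
}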
There are plenty more ways to trigger skills; for a full list see this page: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/supported-phrases-to-begin-a-conversation (scroll down to the tables).
Finally, thank you u-gen for the feedback; bst looks like an interesting project (never tried it), I guess it can be really useful if you use a hosted solution like Lambda. But thanks to the docs I found flask-ask, a project that simplified my code.
And here is the Python part of my test project, if someone else would like to try it out:
#!/usr/bin/env python
from flask import Flask, render_template
from flask_ask import Ask
from flask_ask import statement, question, convert_errors

app = Flask(__name__)
ask = Ask(app, '/ask/')

@app.route('/')
def hello_world():
    return 'Hello, World!'

@ask.launch
def launched():
    return question('Welcome to Foo')

@ask.intent('TestIntent')
def hello():
    return statement('Hello, world')

@ask.session_ended
def session_ended():
    return "", 200

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", threaded=True)

Cannot access input text of Watson conversation

I tried to access the user's input as described here: http://www.ibm.com/watson/developercloud/doc/conversation/advanced_overview.shtml based on the car-dashboard dialog.
{
  "output": {
    "text": "Great choice! Playing some #genre music for you. <?input text?>"
  }
}
Error:
Dialog node error
Error when updating output with output of dialog node id:node_5_1469049934217. Fix the dialog node. Node output was:{"text":"Great choice! Playing some #genre music for you. "} org.springframework.expression.spel.SpelParseException: EL1041E:(pos 6): After parsing a valid expression, there is still more data in the expression: 'text'
To access the user input, you can use the input object.
For example:
<? input.text ?>
The following help page has more details under "accessing inputs".
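Applied to the node from the question, the corrected output would then look like this (note the dot in input.text):
{
  "output": {
    "text": "Great choice! Playing some #genre music for you. <? input.text ?>"
  }
}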
