Alexa Echo Dot - ASK skill problems

I'm trying to make a simple test custom Alexa Skill, but I'm stuck and I'm not sure what the problem is. Maybe someone more experienced knows what I'm missing?
Invocation Name
home system
Intent Schema
{
  "intents": [
    {
      "intent": "AMAZON.HelpIntent",
      "slots": []
    },
    {
      "intent": "TestIntent",
      "slots": [
        {"name": "test", "type": "AMAZON.NUMBER"}
      ]
    }
  ]
}
Sample Utterances
TestIntent set state {test}
TestIntent add state
I have written my own little Python server on my own self-hosted machine; I already have a working news flash skill on the same system. I have spent plenty of time looking at the documentation and reading tutorials, and it looks like I have done what I'm supposed to do.
The result I get is this:
A LaunchRequest works, both in the Service Simulator and on the Echo. It triggers an HTTP POST with the expected JSON, and I get the expected voice reply.
But an IntentRequest only works from the Service Simulator; it never works on the Echo. When I say, for example, "alexa home system set state eight", no requests are made to my server; the Echo just makes a sound and that's all.
I have no idea how to debug this. The skill is a US skill and my Echo is in US mode. I have tried setting the endpoint to both Europe and North America, tried different trigger words, different slots, no slots ... and I have of course checked under Settings -> History to make sure that the device understood me correctly.
Any idea what to try next? How to debug this?

I found the problem; it was a classic PEBCAK (Problem Exists Between Chair And Keyboard).
I had missed that I had to be much more precise about how to invoke an intent (a single sentence that contains both the trigger word and the intent in one go). Valid and working examples are:
Alexa, ask home system to set state nine
Alexa, set state twelve using home system
Alexa, tell home system set state one
I realised this when I used the alternative two-step invocation and noticed that it worked; so it had to be the way I invoked the skill, not the backend:
Alexa, open home system
(Alexa responds, and listens for the command)
Set state to eight
(Intent triggered, Alexa responds)
The first request above is the LaunchRequest.
The LaunchRequest must respond with shouldEndSession: false, otherwise the session will end. That maps to question(...) in my code.
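For reference, the JSON reply behind that looks roughly like this (a trimmed sketch of an ASK response, not the exact document flask-ask generates):

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to Foo"
    },
    "shouldEndSession": false
  }
}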
There are plenty more ways to trigger skills; for a full list see this page: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/supported-phrases-to-begin-a-conversation (scroll down to the tables)
Finally, thank you u-gen for the feedback; bst looks like an interesting project (never tried it), and I guess it can be really useful if you use a hosted solution like Lambda. Thanks to the docs I found flask-ask, a project that simplified my code.
Here is the Python part of my test project, in case someone else would like to try it out.
#!/usr/bin/env python
from flask import Flask
from flask_ask import Ask, statement, question

app = Flask(__name__)
ask = Ask(app, '/ask/')  # the skill endpoint is served under /ask/


@app.route('/')
def hello_world():
    return 'Hello, World!'


@ask.launch
def launched():
    # question(...) keeps the session open (shouldEndSession: false)
    return question('Welcome to Foo')


@ask.intent('TestIntent')
def hello():
    # statement(...) replies and ends the session
    return statement('Hello, world')


@ask.session_ended
def session_ended():
    return "", 200


if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", threaded=True)
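If you want to poke the endpoint without the Service Simulator, here is a minimal sketch using requests. The body is a stripped-down LaunchRequest, not the full payload Alexa sends, and you may need ASK_VERIFY_REQUESTS = False in the Flask config while testing, since flask-ask normally verifies Amazon's signature headers:

import requests

# Stripped-down LaunchRequest; the real payload also carries session,
# context and signed certificate headers that flask-ask verifies.
body = {
    "version": "1.0",
    "session": {"new": True, "sessionId": "test-session"},
    "request": {"type": "LaunchRequest", "requestId": "test-request"},
}

print(requests.post("http://localhost:5000/ask/", json=body).json())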

How to create a liquidity pool on Raydium on Solana devnet?

Can anyone give me any advice on how to create an LP pool on the Solana devnet?
I planned this work to test swaps between two specific tokens on the devnet using the Raydium protocol.
So, I need to create a swap pool on the devnet first.
To achieve this, I did the following.
First of all, to list on the Serum market, I cloned the Raydium-Dex repository on my Mac, changed the Serum DEX's program id from mainnet to devnet, and successfully registered on the devnet Serum (a custom token paired with SOL).
As a result, I got a Serum market program id.
After that, I cloned the Raydium-frontend repository to create a liquidity pool, and modified wellknownToken.config.ts so that a new pool could be created with my custom tokens.
Finally, I could access the create pool UI from the localhost web UI.
I clicked the Initialize Liquidity Pool button on the UI and apparently got a toast message, "Create a new pool Transaction Sent".
However, it did not work: I cannot find the transaction hash on the Solscan website.
I traced the button's click-event code and figured out one thing.
One of the result values of the Liquidity.makeCreatePoolTransaction function is null, specifically feePayer.
const { transaction: sdkTransaction1, signers: sdkSigners1 } = Liquidity.makeCreatePoolTransaction({
  poolKeys: sdkAssociatedPoolKeys,
  userKeys: { payer: owner }
})

const testTx = await loadTransaction({ transaction: sdkTransaction1, signers: sdkSigners1 })
console.log('feepayer', testTx.feePayer?.toBase58()) // null
I feel this is not the preferred (good) way to create a swap pool on the Solana devnet, but I cannot find a better way to achieve this task.
What am I missing? Or what should I read or learn?
Please give me some advice on how to make it work.
Thanks.
It looks like the transaction created with Liquidity.makeCreatePoolTransaction hasn't been sent to the network, so it doesn't exist anywhere. Be sure to send and confirm the transaction before trying to load it. You can use something like:
const { transaction: sdkTransaction1, signers: sdkSigners1 } = Liquidity.makeCreatePoolTransaction({
  poolKeys: sdkAssociatedPoolKeys,
  userKeys: { payer: owner }
});

// Actually submit the transaction; the wallet pays the fee, and the
// signers generated by the SDK must co-sign.
await sendAndConfirmTransaction(connection, sdkTransaction1, [wallet, ...sdkSigners1]);
Note that this requires a connection to send and a wallet to sign. More info at https://github.com/solana-labs/solana/blob/9ac2245970de90af30bff9d1f7f35cd2d8f2bf6d/web3.js/src/util/send-and-confirm-transaction.ts#L18
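A minimal sketch of that setup (assuming secretKey holds the key of a wallet funded with devnet SOL; sdkTransaction1 and sdkSigners1 come from the snippet above):

import { Connection, Keypair, clusterApiUrl, sendAndConfirmTransaction } from '@solana/web3.js';

// Assumption: secretKey belongs to a wallet already funded with devnet SOL
// (e.g. via `solana airdrop`).
const wallet = Keypair.fromSecretKey(secretKey);
const connection = new Connection(clusterApiUrl('devnet'), 'confirmed');

const signature = await sendAndConfirmTransaction(connection, sdkTransaction1, [wallet, ...sdkSigners1]);
console.log(signature); // this hash should now be findable on Solscan (devnet)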
You might run into other issues though, because the Raydium program isn't deployed to devnet: https://explorer.solana.com/address/675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8?cluster=devnet

Custom FCM notification sound for Unity Android

In my Unity project I want to play a custom sound when I get a Firebase Cloud Messaging notification, not the system default sound.
So after following other answers, my message looks like this:
{
  "to": "some_key",
  "notification": {
    "title": "Title",
    "android_channel_id": "2",
    "body": "Body",
    "sound": "custom_sound.wav"
  }
}
and I placed custom_sound.wav in Assets/Plugins/Android/res/raw. When I unzip my .apk, I can see my sound file is in the right location.
But it keeps playing the system default sound, even after I remove the sound field. Is there anything else I should check?
First: a quick tip when debugging. If you select "Export Project", you can open the generated Gradle project with Android Studio.
Occasionally you have to update the Gradle wrapper, but it helps a ton to debug things like "is my sound file in res/raw" without having to decompress your APK and poke around.
I think the issue you're running into is that sounds are now associated with NotificationChannels (as of Android O) rather than with individual notifications, as noted by this StackOverflow post describing a similar issue, and this isn't exposed via the Unity SDK.
Fortunately, you can add a channel with Unity.Notifications.Android.
It should be as simple as creating a new
public AndroidNotificationChannel(string id, string title, string description, Importance importance)
with your id set to "2" (to match your sample notification above; since this is a string, I would recommend giving it a better name :D).
Then you can call RegisterNotificationChannel with the channel you created as its parameter.
For example, to get your notification above to work, I believe you can write:
var notificationChannel = new AndroidNotificationChannel("2", "Channel 2 (working title)", "This is the 2nd channel", Importance.Default);
AndroidNotificationCenter.RegisterNotificationChannel(notificationChannel);
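One caveat if it still plays the default sound: Android caches a channel's settings once it has been registered, so if channel "2" already existed with different settings you'll need to uninstall and reinstall the app (or pick a fresh channel id) for the change to take effect. Also double-check that the android_channel_id in your FCM payload exactly matches the id you register.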
Let me know if this helps!
--Patrick

watchOS AudioRecorder has no input, doesn't ask for permission

I am using this code to show the AudioRecorder on the Apple Watch (taken from https://www.raywenderlich.com/345-audio-recording-in-watchos-tutorial)
let outputURL = chatMasterController.newOutputURL()
let preset = WKAudioRecorderPreset.narrowBandSpeech
let options: [String : Any] =
    [WKAudioRecorderControllerOptionsMaximumDurationKey: 30]

presentAudioRecorderController(
    withOutputURL: outputURL,
    preset: preset,
    options: options) { [weak self] (didSave: Bool, error: Error?) in
        guard didSave else { return }
        print("finished audio to \(chatID) at \(outputURL)")
        print(outputURL)
}
The recorder pops up; however, it doesn't seem to take any input. The waveform doesn't rise while speaking, and trying to play the recording afterwards leaves me with 0.2 seconds of silence no matter how long the recording is.
I've tried another app that makes use of the microphone, and that app did ask me for permission to record audio. I feared I had dismissed the permission prompt before, so I reinstalled my app, which however didn't change anything: no permission being asked for, no input being recorded.
Is there something I've missed, e.g. importing a lib?
I've now figured it out. You don't just need the Privacy - Microphone Usage Description string in your Watch app's plist; you also need to set it in the iPhone app's plist.
Setting it only on the Watch does nothing, and setting it only on the iPhone doesn't let you allow it on the Watch directly. So you need it in both.
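For reference, the raw Info.plist key behind that Xcode display name is NSMicrophoneUsageDescription; the entry (the description string here is just an example) looks like:

<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record voice messages.</string>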
No idea why this isn't documented anywhere but it fits Apple's "we are going downhill" movement :)

Got the "My test app isn't responding right now. Try again soon." error message even a clean start from Google Assistant Simulator

I am still quite new to this topic, so sorry if I don't provide enough information.
For the first time, I copied everything from https://developers.google.com/actions/dialogflow/first-app to learn about it, and it worked great.
Then I tried to create my own one, and at the end I got the message "My test app isn't responding right now. Try again soon." from https://console.actions.google.com/project/[[PROJECT-ID]]/simulator/.
Therefore, I deleted everything and made a completely new start, including all the projects on https://console.actions.google.com/ and https://console.dialogflow.com.
I then copied the exact same thing from https://developers.google.com/actions/dialogflow/first-app again, but this time I still got "My test app isn't responding right now. Try again soon." from https://console.actions.google.com/project/[[PROJECT-ID]]/simulator/.
I looked at the Firebase log: no errors there.
I tried the web demo from the integration tab, and everything works as expected (which means the server-side code and the connection have no problem); Firebase also logged the request.
I tried a different browser (Chrome -> Firefox): still not working.
Here is the response from the Google Assistant Simulator (it's basically nothing):
{
  "audioResponse": "//NExAARqQ...",
  "conversationToken": "GidzaW11bG...",
  "response": "My test app isn't responding right now. Try again soon.",
  "visualResponse": {
    "visualElements": []
  }
}
And here is the debug message (yes, there's nothing in there, so I'm stuck):
{
  "agentToAssistantDebug": {},
  "assistantToAgentDebug": {}
}
Any help would be appreciated. Thanks!
In the Actions Console:
Go to Develop -> Invocation
Set a display name (e.g. Hello World) and click Save
Go to Test and type "Talk to Hello World"
That fixed the issue for me.
Make sure your Actions on Google project has a name.
I spent almost two days scratching my head over this. Just go to the Activity controls of the relevant Google account (the account you are using for the simulator) and turn on all those switches (you may leave out the YouTube-related ones).
And ... voilà, it works!
Usually these are turned off for non-personal accounts.
I faced the same issue when I tried to change the language of the app to a different locale.
Try the following:
Check that the welcome intent and fallback intents have responses and training phrases
Check that all contexts are mapped
Disable and re-enable testing
At least in my case, I had added 'Suggestions' to an ending scene. You can see the error in the log on the right-hand side of the 'Test' page. The fix is to remove the 'Suggestions' from the ending scene.
I had the exact same issue, and after struggling for hours I found the stupid error on my side: in my Dialogflow agent settings, I had accidentally turned on the V2 API. So my Firebase function kept complaining about a null intent. Hope this helps.

How to hide Telegram BOT commands when it is part of a group?

I'm trying to use a Telegram bot to send messages to a group. First, I thought it'd be enough to know the group chat id to accomplish that, but it's not: the bot MUST be part of that group. OK, that kind of makes sense, but here's the problem: when you add a bot to a group (a large group in this case), everyone starts seeing a new icon on their devices, a "slash" icon. And what do they do? They click on it, see the list of commands, choose one of them, and all of a sudden everyone is getting a new message from the group: a "/something". Imagine dozens of people doing that? It's pretty annoying. So, any of these would work for me:
1) Can I send messages from a bot to a group without having that bot in the group?
2) Can I have a kind of "no methods" bot that only sends messages?
3) Can I disable the "slash" icon on clients so I won't have a "bot method war" in the group?
Thank you
No, you cannot have a bot send messages to a group without it being part of that group.
You can simply not set any commands with BotFather, and then clients will have no commands to display.
The "slash" icon is always there if a bot is in the current chat, but with no commands set in BotFather the list it opens is simply empty.
I've got a much better solution: it is possible to customize the commands directly in code, depending on the context (i.e. private chat, groups, etc.).
This example uses Telegraf, but it isn't much different from plain Bot API code.
bot.start(function(ctx) {
  // Commands shown when the bot is used outside a group
  ctx.telegram.setMyCommands(
    [
      {
        "command": "mycommand",
        "description": "Do something in private messages"
      },
      {
        "command": "help",
        "description": "Help me! :)"
      }
    ],
    { scope: { type: 'default' } }
  )

  // Commands shown when the bot is used inside a group
  ctx.telegram.setMyCommands(
    [], // <-- empty commands list
    { scope: { type: 'all_group_chats' } }
  )

  ctx.reply('Hello! I\'m your super-cool bot!!!')
})
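Note that besides default and all_group_chats, the Bot API defines further command scopes (all_private_chats, all_chat_administrators, chat, chat_administrators, chat_member), so you can fine-tune exactly where each set of commands shows up.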
Bonus point: you can also manage a command's behaviour by checking the source of the message. For example, if a user in a group still tries to invoke your command manually and you don't want to execute anything:
bot.help(function(ctx) {
  // Do nothing if /help was not triggered from a private chat
  // (i.e. it came from a group or a supergroup)
  if (ctx.update.message.chat.type !== 'private') {
    return false
  }
  ctx.reply('Hi! This is a help message and glad you are not writing from a group!')
})
