Custom FCM notification sound for Unity Android - Firebase

In my Unity project I want to play a custom sound when I receive a Firebase Cloud Messaging notification, not the system default sound.
After following other answers, my message looks like this:
{
  "to": "some_key",
  "notification": {
    "title": "Title",
    "android_channel_id": "2",
    "body": "Body",
    "sound": "custom_sound.wav"
  }
}
and I placed custom_sound.wav in Assets/Plugins/Android/res/raw. When I unzip my .apk, I can see the sound file is in the right location.
But it keeps playing the system default sound, even after I remove the sound field. Is there anything else I should check?

First, a quick tip for debugging: if you select "Export Project", you can open the generated Gradle project with Android Studio.
Occasionally you have to update the Gradle wrapper, but it helps a ton to debug things like "is my sound file in res/raw?" without having to decompress your APK and poke around.
I think the issue you're running into is that sounds are now associated with NotificationChannels (as of Android O) rather than with individual notifications, as noted by this Stack Overflow post describing a similar issue, and this isn't exposed via the Unity SDK.
Fortunately, you can add a channel yourself with Unity.Notifications.Android.
It should be as simple as creating a new
public AndroidNotificationChannel(string id, string name, string description, Importance importance)
with your id set to "2" (to match your sample notification above; since this is a string, I would recommend giving it a more descriptive name :D).
Then you can call RegisterNotificationChannel with the channel you created as its parameter.
For example, to get your notification above to work, I believe you can write:
var notificationChannel = new AndroidNotificationChannel("2", "Channel 2 (working title)", "This is the 2nd channel", Importance.Default);
AndroidNotificationCenter.RegisterNotificationChannel(notificationChannel);
Let me know if this helps!
--Patrick

Related

Is this login form possible with WebAuthn?

I'm trying to plan a rewrite of my website and I want to make it so I can log in passwordless with just Windows Hello, Touch ID, or Face ID using WebAuthn. All the examples online have a whole popup situation, but I want it done like my mockup. I also want my website to detect the default biometric and have the biometric icon change to the icon representing the default one, for example, a face icon for Face ID. This website will be done using python-flask, ReactJS, MySQL, CSS, and HTML.
There are a few different points to hit on here:
Pop-up/Modal
We'll start with this one. Unfortunately, the pop-ups that appear during the WebAuthn ceremony are part of the browser's implementation. Every time the get()/create() methods are called, the pop-ups will be invoked. There is some work coming out from Google/Apple in their passkey implementations where this will look more like an "autofill" experience, but you will still be required to use their pop-ups.
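If you want to experiment with that "autofill" style flow today, here is a minimal TypeScript sketch, assuming a browser that already supports WebAuthn conditional mediation (the challenge and rpId values are placeholders, not something from your actual backend):

// Sketch only: conditional ("autofill") mediation for WebAuthn sign-in.
// In a real app the challenge comes from your server, base64url-decoded.
async function signInWithConditionalUI(): Promise<Credential | null> {
  // Feature-detect conditional mediation before relying on it.
  const supported = await (PublicKeyCredential as any)
    .isConditionalMediationAvailable?.();
  if (!supported) return null;

  // The browser surfaces matching passkeys in its own autofill-style UI;
  // you still cannot replace that UI with your own markup.
  return navigator.credentials.get({
    mediation: "conditional",
    publicKey: {
      challenge: new Uint8Array(32), // placeholder challenge
      rpId: "example.com",           // placeholder relying party ID
      userVerification: "preferred",
    },
  });
}

Either way, the actual prompt the user sees is still drawn by the browser/OS, not by your page.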
Defaulting to Windows Hello, Touch ID, etc..
I'll start by suggesting that you shouldn't constrain your users to only the platform authenticators. Security keys still play a big role in WebAuthn and work really well for signing in across devices. Relying on platform authenticators could limit your users to the device they initially registered with, or limit users who don't have a biometric sensor on their device.
With that being said, you can explicitly invoke the use of only platform authenticators using the PublicKeyCredentialCreationOptions. In the authenticatorSelection property there is a field called authenticatorAttachment. If you set this field to "platform", then your platform authenticator will be invoked (if one is available).
Here's an example of the request sent by the relying party (note the property authenticatorSelection towards the bottom):
{
  "publicKey": {
    "rp": {
      "name": "Example Inc",
      "id": "example.com"
    },
    "user": {
      "name": "user",
      "displayName": "user",
      "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    },
    "challenge": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "pubKeyCredParams": [***],
    "excludeCredentials": [***],
    "authenticatorSelection": {
      "authenticatorAttachment": "platform",
      "residentKey": "preferred",
      "userVerification": "preferred"
    },
    "attestation": "direct",
    "extensions": {}
  }
}
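On the client side those options end up in a navigator.credentials.create() call. Here is a rough TypeScript sketch of that call, with placeholder challenge/user values since the real ones come base64url-encoded from the relying party:

// Sketch only: invoking the platform authenticator for registration.
// Decode the server's challenge and user.id into ArrayBuffers in real code.
async function registerPlatformAuthenticator(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      rp: { name: "Example Inc", id: "example.com" },
      user: {
        id: new Uint8Array(16),      // placeholder user handle
        name: "user",
        displayName: "user",
      },
      challenge: new Uint8Array(32), // placeholder challenge
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // Windows Hello / Touch ID / Face ID
        residentKey: "preferred",
        userVerification: "preferred",
      },
      attestation: "direct",
    },
  });
}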
Detecting default biometric
I have a React example here. Some things to note on this approach:
There are more elegant and accurate ways of determining what platform the user is on. This snippet will work the majority of the time, but there is a lot of assumption happening based only on the detected OS.
There are no icons included; I would suggest adding an imgSrc field to the enums that includes a link to the source image.
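Since that linked example may not be available, here is a rough TypeScript sketch of the same idea; the enum and function names are mine, and the OS-to-biometric mapping is exactly the kind of assumption mentioned above:

// Sketch: guess the "default" biometric from the user agent so the UI can
// show a matching icon/label. This is an assumption-heavy heuristic.
enum BiometricHint {
  FaceID = "Face ID",
  TouchID = "Touch ID",
  WindowsHello = "Windows Hello",
  Generic = "Biometric",
}

function detectBiometricHint(ua: string = navigator.userAgent): BiometricHint {
  if (/iPhone|iPad/i.test(ua)) return BiometricHint.FaceID;  // most recent iOS devices
  if (/Macintosh/i.test(ua)) return BiometricHint.TouchID;   // Macs with Touch ID
  if (/Windows/i.test(ua)) return BiometricHint.WindowsHello;
  return BiometricHint.Generic;                              // neutral fallback icon
}

Each enum value could carry an imgSrc as suggested above, so the login form can swap the icon in one place.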
Hope this helps.

Custom sound push notification does not work (Flutter)

{
  "to": "XXXX",
  "notification": {
    "title": "ASAP Alert",
    "body": "Please open your app"
  },
  "data": {
    "screen": "/Nexpage1",
    "sound": "alarm",
    "click_action": "FLUTTER_NOTIFICATION_CLICK"
  }
}
Above is my payload for the push notification. I have put the alarm.mp3 file inside the raw folder, however it still does not give me the alarm sound. I have also tried "alarm.mp3"; is there anything wrong with the JSON, or is it because of the code in my Dart file?
Reading this, it seems that it should be managed automatically on Android (if you didn't use a notification builder), but you have to specify the .mp3 extension too and put it inside the notification field, not the data one:
"sound": "alarm.mp3"
iOS behaves very differently under the hood, but you can use a custom sound by setting the sound field in the notification payload there too. However, .mp3 is not a valid APNs notification sound format, and you also need to specify the file extension:
"sound": "filename.caf"
Follow the Apple documentation to prepare the custom sound file for your app; .mp3 is not a valid format:
Preparing Custom Alert Sounds
Local and remote notifications can specify custom alert sounds to be
played when the notification is delivered. You can package the audio
data in an aiff, wav, or caf file. Because they are played by the
system-sound facility, custom sounds must be in one of the following
audio data formats:
Linear PCM
MA4 (IMA/ADPCM)
µLaw
aLaw
Place custom sound files in your app bundle or in the
Library/Sounds folder of your app’s container directory. Custom
sounds must be under 30 seconds when played. If a custom sound is
over that limit, the default system sound is played instead.
You can use the afconvert tool to convert sounds. For example, to
convert the 16-bit linear PCM system sound Submarine.aiff to IMA4
audio in a CAF file, use the following command in the Terminal app:
afconvert /System/Library/Sounds/Submarine.aiff ~/Desktop/sub.caf -d ima4 -f caff -v
For example, to convert your .mp3 file into a .caf file you could type in the terminal:
afconvert -f caff -d LEI16 alarm.mp3 alarm.caf
Read this doc to get deeper insight into all the generic and platform-specific notification payload fields.
UPDATE
I've tested the Android part and I can confirm that, with your .mp3 file in the res/raw/ folder, the sound is played as documented and expected.
That's my notification payload:
{
  "to": "my_device_token",
  "collapse_key": "type_a",
  "priority": "high",
  "notification": {
    "body": "Test Notification body for custom sound {{datestamp}}",
    "title": "Custom sound alert.mp3",
    "sound": "alert.mp3"
  }
}
I've also tested the iOS version after converting the .mp3 file to a .caf file this way:
afconvert -f caff -d LEI16 alert.mp3 alert.caf
The same JSON payload with the different filename works:
{
  "to": "my_device_token",
  "collapse_key": "type_a",
  "priority": "high",
  "notification": {
    "body": "Test Notification body for custom sound {{datestamp}}",
    "title": "Custom sound alert.mp3",
    "sound": "alert.caf"
  }
}
Remember to add the file to your main bundle.
That works if the app is terminated or in the background.
If you want to show an alert and play a sound when the app is in the foreground, you have to handle it in the onMessage event as someone has already told you here, or you can use a platform channel here to build your own notification with Notification.Builder on Android and UNUserNotificationCenter on iOS (for example).
UPDATE
This issue has been solved. See the official comment here:
Hey all 👋
As part of our roadmap (#2582) we've just shipped a complete rework of
the firebase_messaging plugin that aims to solve this and many other
issues.
If you can, please try out the dev release (see the migration guide
for upgrading and for changes) and if you have any feedback then join
in the discussion here.
Given the scope of the rework I'm going to go ahead and close this
issue in favor of trying out the latest plugin.
Thanks everyone 🤓
ShadowSheep did a good job of answering this question, but there's one thing I want to clarify about getting the iOS sounds to work.
You have to add the sound into Xcode (which is where ShadowSheep speaks of including the asset inside of the main bundle). You can just drag and drop the audio file (in .caf or another supported format mentioned above) into the root directory (usually called Runner for Flutter) in Xcode.
If you have done this and follow the setup described in the above question/answer, you should be in business.
For me, I am using the flutter_local_notifications package to create the notification channel.
Include this function (it may be used to create multiple notification channels):
Future<void> _createNotificationChannel(String id, String name,
    String description, String sound) async {
  final flutterLocalNotificationsPlugin = FlutterLocalNotificationsPlugin();
  var androidNotificationChannel = AndroidNotificationChannel(
    id,
    name,
    description,
    sound: RawResourceAndroidNotificationSound(sound),
    playSound: true,
  );
  await flutterLocalNotificationsPlugin
      .resolvePlatformSpecificImplementation<
          AndroidFlutterLocalNotificationsPlugin>()
      ?.createNotificationChannel(androidNotificationChannel);
}
Call the function in initState (this creates 2 notification channels):
_createNotificationChannel("channel_id_1", "channel_name", "description", "alert");
_createNotificationChannel("channel_id_2", "channel_name", "description", "alarm");
Remember to save the alert and alarm sound files in res/raw in .mp3 format.
With this payload:
{
  "notification": {
    "title": "My First Notification",
    "body": "Hello, I'm push notification"
  },
  "data": {
    "title": "My First Notification"
  },
  "android": {
    "notification": {
      "channel_id": "channel_id_1"
    }
  },
  "to": "device_token"
}
For the people who are trying to make a custom sound work for both Android and iOS, there is a good guide here:
For iOS:
To add a custom sound for iOS, add the audio clip for iOS to your
project's App_Resources/iOS/ directory (iOS only accepts .wav, .aiff,
and .caf extensions).
For Android:
First, add the audio clip to your project's
App_Resources/Android/src/main/res/raw/ directory (Android only
accepts .wav, .mp3 and .ogg extensions).
So we can use a .wav file in order to have a custom sound available on both platforms:
{
  "notification": {
    "body": "Test notification",
    "title": "Test Test Test",
    "click_action": "FLUTTER_NOTIFICATION_CLICK",
    "sound": "your_custom_sound.wav",
    "android_channel_id": "channel_id_youcreated"
  },
  "to": ""
}
After trying this solution on Android I still couldn't change to a custom sound. You need the .ogg file type to resolve this issue. After changing to a .ogg file, I could use a custom sound. Thanks.

Alexa Echo Dot - ASK skill problems

I'm trying to make a simple test custom Alexa Skill, but I'm stuck and I'm not sure what the problem is. Maybe someone more experienced knows what I'm missing?
Invocation Name
home system
Intent Schema
{
  "intents": [
    {
      "intent": "AMAZON.HelpIntent",
      "slots": []
    },
    {
      "intent": "TestIntent",
      "slots": [
        {"name": "test", "type": "AMAZON.NUMBER"}
      ]
    }
  ]
}
Sample Utterances
TestIntent set state {test}
TestIntent add state
I have written my own little Python server on my own self-hosted server; I already have a working news flash skill on the same system. I have spent plenty of time looking at the documentation and reading tutorials, and it looks like I have done what I'm supposed to do.
The result I get is this:
A LaunchRequest works, both in the Service Simulator and on the Echo. It triggers an HTTP POST with the expected JSON, and I get the expected voice reply.
But an IntentRequest only works from the Service Simulator; it never works on the Echo. I say, for example, "Alexa, home system set state eight"; no requests are made to my server, the Echo just makes a sound and that's all.
I have no idea how to debug this, the skill is a US skill and my Echo is in US mode. I have tried to set the endpoint in both Europe and North America. Tried different trigger words, different slots, no slots ... and I have of course checked under Settings -> History to make sure that the device understood me correctly.
Any idea what to try next? How to debug this?
I found the problem, it was a classic PEBCAK (Problem Exists Between Chair And Keyboard) problem.
I had missed that I had to be much more precise in how I invoke an intent (a single sentence that contains both the trigger word and the intent in one go). Examples of valid and working phrases are:
Alexa, ask home system to set state nine
Alexa, set state twelve using home system
Alexa, tell home system set state one
I realised this when I used the alternative two-step invocation and saw that it worked. It had to be the way I invoked the skill, not the backend:
Alexa, open home system
(Alexa responds, and listens for the command)
Set state to eight
(Intent triggered, Alexa responds)
The first request above is the LaunchRequest.
The LaunchRequest must respond with shouldEndSession: false, otherwise the session will end. That maps to question(...) in my code.
There are plenty of more ways to trigger the skills, a full list see this page: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/supported-phrases-to-begin-a-conversation (scroll down to the tables)
Finally, thank you u-gen for the feedback; bst looks like an interesting project (never tried it), and I guess it can be really useful if you use a hosted solution like Lambda. But thanks to the docs I found flask-ask, a project that simplified my code.
Finally, here is the Python part of my test project if someone else would like to try it out:
#!/usr/bin/env python
from flask import Flask, render_template
from flask_ask import Ask
from flask_ask import statement, question, convert_errors

app = Flask(__name__)
ask = Ask(app, '/ask/')

@app.route('/')
def hello_world():
    return 'Hello, World!'

@ask.launch
def launched():
    return question('Welcome to Foo')

@ask.intent('TestIntent')
def hello():
    return statement('Hello, world')

@ask.session_ended
def session_ended():
    return "", 200

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", threaded=True)

How to hide Telegram BOT commands when it is part of a group?

I'm trying to use a Telegram BOT to send messages to a group. First, I thought that it'd be enough to know the group chat id to accomplish that, but it's not: the BOT MUST be part of that group. OK, it kind of makes sense, but the problem is: when you add a BOT into a group (a large group in this case), everyone starts seeing a new icon on their devices, a "slash" icon. And what do they do? They click on it, see the list of commands, choose one of them, and all of a sudden everyone is getting a new message from the group: a "/something". Imagine dozens of people doing that? It's pretty annoying. So, any of these would work for me:
1) Can I send messages from a BOT to a group without having that BOT in the group?
2) Can I have a kind of "no methods" BOT that only sends messages?
3) Can I disable the "slash" icon on clients so I won't have a "bot method war" in the group?
Thank you
No, you cannot have bots send messages to a group without being a part of that group.
You can simply not set commands with BotFather, and then clients will have no commands to display.
The slash icon is always there if a bot is in the current chat, but with no commands set in BotFather there is simply nothing for it to display.
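For the "only send messages" part of the question, a minimal sketch (using Telegraf here purely as an example; the token and chat id are placeholders) would register no command handlers at all and just push messages to the group:

// Sketch of a "send-only" bot: no command handlers, so there is nothing for
// the slash menu to list; it only pushes messages to a known group chat.
import { Telegraf } from "telegraf";

const bot = new Telegraf(process.env.BOT_TOKEN ?? "");
const GROUP_CHAT_ID = -1001234567890; // placeholder group chat id

async function notifyGroup(text: string): Promise<void> {
  await bot.telegram.sendMessage(GROUP_CHAT_ID, text);
}

notifyGroup("Hello group, just a notification").catch(console.error);

The bot still has to be a member of the group (see above), but it never exposes any commands.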
I've got a much better solution: you can customize the commands directly in code, also depending on the context (i.e. private chats, groups, etc.).
This example was done using Telegraf, but it's not so different with plain Bot API code:
bot.start(function(ctx) {
  // If the bot is used outside a group
  ctx.telegram.setMyCommands(
    [
      {
        "command": "mycommand",
        "description": "Do something in private messages"
      }, {
        "command": "help",
        "description": "Help me! :)"
      }
    ],
    {scope: {type: 'default'}}
  )
  // If the bot is used inside a group
  ctx.telegram.setMyCommands(
    [
      // <-- empty commands list
    ],
    {scope: {type: 'all_group_chats'}}
  )
  ctx.reply('Hello! I\'m your super-cool bot!!!')
})
Bonus point: you can also manage the command behavior by checking the source of the message. So, for example, if a user in a group still tries to use your command manually and you don't want to execute anything:
bot.help(function(ctx) {
  // Check if the /help command was not triggered by a private chat (like a group or a supergroup) and do nothing in that case
  if (ctx.update.message.chat.type !== 'private') {
    return false
  }
  ctx.reply('Hi! This is a help message and glad you are not writing from a group!')
})

Open Graph story is not visible in timeline after posting

I am using Facebook Android SDK version 4.3.0. I want to post my stories via the Open Graph story method. This is my code:
ShareOpenGraphObject object = new ShareOpenGraphObject.Builder()
        .putString("og:type", "mygame.life")
        .putString("og:title", "Sample Game")
        .putString("og:description", "sample game to publish story")
        .build();
ShareOpenGraphAction action = new ShareOpenGraphAction.Builder()
        .setActionType("mygame.ask")
        .putObject("life", object)
        .build();
ShareOpenGraphContent content = new ShareOpenGraphContent.Builder()
        .setPreviewPropertyName("life")
        .setAction(action)
        .build();
shareDialog.show(activity, content);
It works and also returns a post ID, but there are no stories in my timeline. How can I solve this?
Have you done all the necessary steps?
Request (and be granted) the publish_actions permission
Create your action and object types and submit them for review
Eventually I found my mistake. I hadn't enabled the "Tag" option in "Action Type" (it's essential in order to use setPeopleIds(peopleIds)), and I had also entered my app's namespace incorrectly.
