We are sending Push Notifications to our Android app and want them to appear as a pop-up by default.
The only way I have been able to achieve that is to target a specific notification channel with an Importance of High. When we do that, we do get a visible pop-up for our notification.
The problem with setting the importance is that, by default, a sound then plays as well.
Is the only way to get a pop-up to use a High Importance channel?
If using a High Importance channel, how do we specify a sound of "Silent"?
As far as I know, you will have to set the channel sound explicitly to null:
channel.setSound(null, null)
The problem is that Android does not allow you to change a channel's sound configuration after creation. You may need to create another channel with importance HIGH and the sound set to null. Hope it helps.
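To illustrate, a minimal sketch of creating such a channel (the channel id and name are placeholders; this must run before the channel is first created, since the sound cannot be changed afterwards, and it requires API 26+):

```java
// Sketch: a high-importance (heads-up / pop-up) channel with no sound.
NotificationManager manager =
        (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

NotificationChannel channel = new NotificationChannel(
        "popup_silent",                      // hypothetical channel id
        "Silent pop-up notifications",       // user-visible channel name
        NotificationManager.IMPORTANCE_HIGH);
channel.setSound(null, null);                // no sound, no AudioAttributes
manager.createNotificationChannel(channel);
```

Notifications posted to this channel should then show as a heads-up pop-up without playing a sound.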
Is there a possibility to add extra voice commands to the voice guidance that HERE Maps plays? For example, in addition to "Turn right" (from HERE Maps), I want something like "Stop after turning right".
NMAAudioManager is the central class that is used by SDK for iOS to modify the application AVAudioSession and play audio. It is the interface that the NMANavigationManager uses to play audio feedback such as voice instructions. You can also use NMAAudioManager to change whether hardware keys directly control HERE SDK volume, and also use it to set volume as a factor relative to the user's device volume.
The NMAAudioManager contains a queue of audio output objects. You can add to this queue by calling playOutput: with NMAAudioFileOutput, NMATTSAudioOutput, or your own NMAAudioOutput implementation. You can also use NMAAudioManager methods such as clearQueue, skipCurrentOutput, and stopOutputAndClearQueue to manage audio output in this queue.
Please refer to the link below for a detailed implementation:
developer.here.com/documentation/ios-premium/dev_guide/topics/audio-management.html
I am running the entire sample application provided with RxAndroidBle, from scanning to discovering services to writeCharacteristic. I am trying to debug the flow and put a breakpoint in onWriteClick() of the CharacteristicOperationExampleActivity.java file. Clicking the WRITE button does nothing; the breakpoint is never hit.
Reading the instructions from the RxAndroidBle blog, it states that discovering characteristics is optional for a write. But the way this sample app's activities are set up, one has to go through discovering the characteristics before the Characteristic Operation page is shown. On the characteristics page, I selected the read/write characteristic entry to get to the Operation page. Isn't that the correct way to operate the app?
Also, is there a way to call writeCharacteristic without having to discover the characteristics first? I don't want to show the characteristic view and make the user pick the correct characteristic just to be able to read and write to the BLE device.
In any case, the sample app discovered my BLE device and connected to it, but failed to write to it. Does anyone have experience with RxAndroidBle? Please help.
There seems to be a bug in the example - I couldn't make it work (despite connecting, the buttons were disabled) - I will need to look into it.
As for the quick-fix you can replace the onConnectToggleClick() method with:
@OnClick(R.id.connect)
public void onConnectToggleClick() {
    if (isConnected()) {
        triggerDisconnect();
    } else {
        connectionObservable
                .observeOn(AndroidSchedulers.mainThread())
                .doOnSubscribe(() -> connectButton.setText("Connecting"))
                .subscribe(
                        rxBleConnection -> {
                            Log.d(getClass().getSimpleName(), "Hey, connection has been established!");
                            updateUI();
                        },
                        this::onConnectionFailure
                );
    }
}
The sample application is not meant to be run with any particular BLE device, so to show the possible BluetoothCharacteristics of an unknown device it needs to perform an explicit discovery to present them to the user. When using the library with a known device you can safely use the UUIDs of the BluetoothCharacteristics you're interested in without performing the discovery (it will be done underneath either way, but you don't need to call it explicitly).
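To illustrate, a sketch of a direct write without an explicit discovery step, assuming the RxJava1-based RxAndroidBle 1.x API as used in the sample; CHARACTERISTIC_UUID and bytesToWrite are placeholders you would supply for your own device:

```java
// Sketch: write to a known characteristic straight from the connection,
// letting the library handle service discovery internally.
connectionObservable
        .flatMap(rxBleConnection ->
                rxBleConnection.writeCharacteristic(CHARACTERISTIC_UUID, bytesToWrite))
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                writtenBytes -> Log.d("BLE", "Write finished: " + writtenBytes.length + " bytes"),
                throwable -> Log.e("BLE", "Write failed", throwable)
        );
```

This way the user never has to see or pick a characteristic; only the known UUID is used.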
I'm wondering how I can use Sauce Connect and their REST API to disable video recording and screenshots. Thanks!
The only way I know to disable video recording and screenshots is when you create a WebDriver instance with Selenium, you have to set the desired capabilities named record-screenshots and record-video to "false". For instance, in Python:
from selenium import webdriver

desired_capabilities = dict(webdriver.DesiredCapabilities.CHROME)
desired_capabilities["record-screenshots"] = "false"
desired_capabilities["record-video"] = "false"
driver = webdriver.Remote(
    desired_capabilities=desired_capabilities,
    command_executor="http://localhost:4444/wd/hub")
The REST API is meant to be used after a test has started so it would not be able to prevent the creation of the video and screenshots in the first place. I've seen no evidence that Sauce Connect would be able to do anything about this.
Here is a link to the Sauce Labs documentation (https://docs.saucelabs.com/reference/test-configuration/#disabling-video-recording) explaining how to disable the video recording and screen captures. It's actually a desired capability that is passed as part of the test. Could you please provide more clarity on the Sauce Connect part of the question?
You can set a boolean value as part of DesiredCapabilities to turn video recording on or off. I'd suggest it makes sense to only record video when the test fails, which is what Saucery does. It does work - have a look at this class.
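As a minimal sketch of the boolean-capability approach, assuming the camelCase capability names from the linked Sauce Labs test-configuration page (verify them against your Sauce account, as the exact names have varied between docs versions):

```python
# Hypothetical capability dict to merge into your DesiredCapabilities
# before creating the Remote WebDriver session.
sauce_caps = {
    "recordVideo": False,        # skip video recording for the whole job
    "recordScreenshots": False,  # skip per-step screenshots
}
```

These are plain session capabilities, so they take effect at session creation time rather than through Sauce Connect or the REST API.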
I'm creating a music app using only QML, and it's going really well. I'm now working on the track queue. I'm using QtMultimedia to play the tracks, and there is a property that could be used to play the next track when the current one has ended, but I don't understand how to get the signal.
Here is the doc I'm using: https://qt-project.org/doc/qt-5.0/qtmultimedia/qml-qtmultimedia5-audio.html
There's an EndOfMedia status that I was planning on using, but I don't understand how?
It seems reasonable to connect a handler to the playbackStateChanged() or stopped() signal that checks the status to see if it is EndOfMedia and then plays the next track.
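A minimal sketch of that idea in QML, where nextTrack() is a hypothetical function you would implement to set the source to the next queued track and call play():

```qml
import QtMultimedia 5.0

Audio {
    id: player

    onStopped: {
        // status becomes Audio.EndOfMedia when the track finished on its own,
        // which distinguishes it from the user pressing stop.
        if (status === Audio.EndOfMedia) {
            nextTrack()  // hypothetical: advance the queue and resume playback
        }
    }
}
```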
I have to detect the activityLevel of the microphone in Flex. I am using the activityLevel property of the Microphone class, but as far as I can tell it always returns -1, even after I have called Microphone.getMicrophone().
To detect the activity level, we have to set microphone.setLoopBack(true);
Does anybody know how to do this without using loopback, as I do not want to hear my own sound back, just monitor the activity level?
Thanks
The microphone won't show any activity until it is attached to a NetStream connection. You can use a MockNetStream to fake the connection using OSMF - see my answer here.