I have to detect the activityLevel of the microphone in Flex. I am using the activityLevel property of the Microphone class, but as I found out, it always returns -1 even after calling Microphone.getMicrophone().
To detect the activity level we have to call microphone.setLoopBack(true);
Does anybody know how to do this without using loopback, as I do not want to hear my own sound back, just monitor the activity level?
Thanks
The microphone won't show any activity until it is attached to a NetStream connection. You can use a MockNetStream to fake the connection using OSMF - see my answer here.
I am running the entire sample application provided with RxAndroidBle, from scanning to discovering services to writeCharacteristic. I am trying to debug the flow and put a breakpoint in onWriteClick() of the CharacteristicOperationExampleActivity.java file. Clicking the WRITE button does nothing, and the breakpoint is never hit.
I have been reading the instructions from the RxAndroidBle blog, which state that discovering characteristics is optional for a write. But the way this sample app's activities are set up, one has to go through discovering the characteristics before the Characteristic Operation page is shown. On the characteristic page, I selected the read/write characteristic entry to get to the Operation page. Isn't that the correct way to operate the app?
Also, is there a way to handle writeCharacteristic without having to discover the characteristics first? I don't want to show the characteristic view and make the user pick the correct characteristic just to read from and write to the BLE device.
In any case, the sample app discovered my BLE device and connected to it, but it failed to write to it. Does anyone have experience with RxAndroidBle? Please help.
There seems to be a bug in the example - I couldn't make it work (despite connecting, the buttons were disabled) - I will need to look into it.
As a quick fix, you can replace the onConnectToggleClick() method with:
@OnClick(R.id.connect)
public void onConnectToggleClick() {
    if (isConnected()) {
        triggerDisconnect();
    } else {
        connectionObservable
                .observeOn(AndroidSchedulers.mainThread())
                .doOnSubscribe(() -> connectButton.setText("Connecting"))
                .subscribe(
                        rxBleConnection -> {
                            Log.d(getClass().getSimpleName(), "Hey, connection has been established!");
                            updateUI();
                        },
                        this::onConnectionFailure
                );
    }
}
The sample application is not meant to be run with any particular BLE device, so to show the possible BluetoothGattCharacteristics of an unknown device it needs to perform an explicit discovery and present them to the user. When using the library with a known device, you can safely use the UUIDs of the characteristics you're interested in without performing the discovery (it will be done underneath either way, but you don't need to call it explicitly).
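For illustration, a minimal sketch of writing to a known characteristic by UUID without an explicit discovery step. The MAC address, UUID and payload below are placeholder values, rxBleClient is assumed to be an already initialized RxBleClient, and establishConnection() also takes a Context as the first argument in older library versions:

private Subscription writeToKnownCharacteristic() {
    // Placeholder values - use your own device address, characteristic UUID and payload.
    UUID characteristicUuid = UUID.fromString("0000abcd-0000-1000-8000-00805f9b34fb");
    byte[] payload = new byte[] { 0x01 };

    RxBleDevice device = rxBleClient.getBleDevice("AA:BB:CC:DD:EE:FF");

    return device.establishConnection(false) // no explicit service discovery call needed
            .flatMap(connection -> connection.writeCharacteristic(characteristicUuid, payload))
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(
                    writtenBytes -> Log.d("BLE", "Write finished"),
                    throwable -> Log.e("BLE", "Write failed", throwable)
            );
}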
I'm wondering how I can use Sauce Connect and their REST API to disable video recording and screenshots. Thanks!
The only way I know to disable video recording and screenshots is to set the desired capabilities named record-screenshots and record-video to "false" when you create a WebDriver instance with Selenium. For instance, in Python:
from selenium import webdriver

desired_capabilities = dict(
    webdriver.DesiredCapabilities.CHROME)
desired_capabilities["record-screenshots"] = "false"
desired_capabilities["record-video"] = "false"
driver = webdriver.Remote(
    desired_capabilities=desired_capabilities,
    command_executor="http://localhost:4444/wd/hub")
The REST API is meant to be used after a test has started so it would not be able to prevent the creation of the video and screenshots in the first place. I've seen no evidence that Sauce Connect would be able to do anything about this.
Here is a link to the Sauce Labs documentation (https://docs.saucelabs.com/reference/test-configuration/#disabling-video-recording) explaining how to disable video recording and screen captures. It's actually a desired capability that is passed as part of the test. Can you please provide more clarity on the Sauce Connect question?
You can set a boolean value as part of DesiredCapabilities to turn video recording on or off.
I'd suggest it makes sense to only record video when the test fails, which is what Saucery does. It does work. Have a look at this class.
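For illustration, a minimal sketch of the boolean form, reusing the capability names from the earlier answer (the exact names and accepted value types depend on the Sauce Labs API version, so treat them as assumptions):

from selenium import webdriver

caps = dict(webdriver.DesiredCapabilities.CHROME)
# Booleans instead of the string "false"; capability names as in the answer above.
caps["record-video"] = False
caps["record-screenshots"] = False

driver = webdriver.Remote(
    desired_capabilities=caps,
    command_executor="http://localhost:4444/wd/hub")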
I'm creating a music app using only QML and it's going really well. I'm now working on the track queue. I'm using Qt.Multimedia to play the tracks, and there is a property that could be used to play the next track when the current one has ended, but I don't understand how to get the signal.
Here is the doc I'm using: https://qt-project.org/doc/qt-5.0/qtmultimedia/qml-qtmultimedia5-audio.html
There's an EndOfMedia status that I was planning on using, but I don't understand how.
It seems reasonable to connect a Slot to the playbackStateChanged() or stopped() signal that checks the status to see if it is EndOfMedia and then plays the next track.
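A rough sketch of that idea in QML (the trackQueue and currentIndex properties are hypothetical stand-ins for your own queue; only the Audio element and its EndOfMedia check come from the QtMultimedia docs):

import QtQuick 2.0
import QtMultimedia 5.0

Item {
    // Hypothetical queue - replace with however you store your track list.
    property var trackQueue: ["tracks/one.mp3", "tracks/two.mp3"]
    property int currentIndex: 0

    Audio {
        id: player
        source: trackQueue[currentIndex]
        autoPlay: true

        onStopped: {
            // stopped() also fires on a manual stop(), so check that the track
            // actually ran to its end before advancing the queue.
            if (status === Audio.EndOfMedia && currentIndex < trackQueue.length - 1) {
                currentIndex++
                source = trackQueue[currentIndex]
                play()
            }
        }
    }
}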
FMLE = Flash Media Live Encoder 3.0
I have posted this question on the Adobe forum, but I'm not sure they have people there with programming experience.
I am a developer writing a video capture device and an audio capture device. The devices are written in DirectShow and already work in other encoders. I am integrating with FMLE and encountered this problem.
The audio device doesn't have a usable volume bar in FMLE. The FMLE error is "The selected audio device "censored (company secret)" doesn't allow setting volume intensity. Disabling the volume slider control."
My audio device implements these interfaces along with the standard DirectShow filter interfaces:
IBasicAudio
IAMAudioInputMixer
I put tracepoints in QueryInterface and found that FMLE queries for the following (my comments inline):
{IID_IUnknown}
{IID_IPersistPropertyBag}
{IID_IBaseFilter}
{IID_IAMOpenProgress}
{IID_IAMDeviceRemoval}
{IID_IMediaFilter}
{IID_IAMBufferNegotiation}
{IID_IAMStreamConfig}
{IID_IPin}
{IID_IReferenceClock}
{IID_IMediaSeeking}
{IID_IMediaPosition}
{IID_IVideoWindow} // WTF ?? query video window ?
{IID_IBasicAudio}
{2DD74950-A890-11D1-ABE8-00A0C905F375} // I think this is the async stream interface
What am I missing? Doesn't FMLE use IAMAudioInputMixer?
Does anyone know the exact interface FMLE uses for volume intensity? I assumed it was IBasicAudio, but it doesn't seem to call any methods in there.
Answer provided by Ram Gupta on the Adobe forum:
"FMLE does not query for CLSID_AudioInputMixerProperties interface.
FMLE enumerates all the pin of audio source filter(using EnumPins) and then it extracts each pin info using QueryPinInfo Function.
FMLE searches for the audio filter Pin whose direction is PINDIR_INPUT(using QueryPinInfo) and then it queries for IAMAudioInputMixer interface to set the volume level.
Could you pls chk if the following functions are properly implemented
-->get_enable: it should set its parameter value to true.
-->put_MixLevel
-->QueryPinInfo:"
This solution did work. My problem was that I had never declared an input pin (since I don't have any DirectShow-related input), so FMLE never found a pin on which to query for IAMAudioInputMixer.
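For reference, a rough sketch of the consumer-side sequence the quoted answer describes: enumerate the pins, find the one whose direction is PINDIR_INPUT, and drive the volume through IAMAudioInputMixer. This is only an approximation of what FMLE does internally; pFilter is assumed to be an already-created audio capture filter and the project is assumed to link against strmiids.lib:

#include <dshow.h>

HRESULT SetCaptureVolume(IBaseFilter *pFilter, double level)
{
    IEnumPins *pEnum = NULL;
    HRESULT hr = pFilter->EnumPins(&pEnum);
    if (FAILED(hr)) return hr;

    HRESULT result = E_NOINTERFACE; // stays this way if no suitable pin is found
    IPin *pPin = NULL;
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_INFO info;
        if (SUCCEEDED(pPin->QueryPinInfo(&info)))
        {
            if (info.pFilter) info.pFilter->Release(); // QueryPinInfo AddRefs the owning filter

            IAMAudioInputMixer *pMixer = NULL;
            if (info.dir == PINDIR_INPUT &&
                SUCCEEDED(pPin->QueryInterface(IID_IAMAudioInputMixer, (void**)&pMixer)))
            {
                BOOL enabled = FALSE;
                pMixer->get_Enable(&enabled);         // FMLE reportedly checks this first
                result = pMixer->put_MixLevel(level); // 0.0 .. 1.0
                pMixer->Release();
                pPin->Release();
                break;
            }
        }
        pPin->Release();
    }
    pEnum->Release();
    return result;
}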
I am using Flex 3 and FMS3, from which I am sending a video stream. I want the user to be able to pause the stream, then resume it.
For this I am using the methods pause() and resume(). The problem is that when I call pause(), the buffer is released and bufferLength equals zero. Accordingly, when I resume, the NetStream has to start buffering all over again, which means I lose all video from the second I paused until I press resume. That rather defeats the purpose of pause and resume.
Any help?
Please see my other question for this.
Create server-side DVR application to be able to record DVR in FMS
If possible, please close this one.
Regards Niclas