mergExt: using mergAV for audio recording only

Is it possible to use the LiveCode external mergAV to record just audio and save it as AIFF or WAVE?
If it is, where can I find examples?
Thanks in advance.
re.mu.

No, but there are two other options for audio recording: RunRev's example external rremicrophone (in the iOS Externals SDK) and my derivative mergMicrophone, whose source is available at http://github.com/montegoulding/mergmicrophone and whose binaries are available from mergExt.com.

Related

Unix: what is the /dev/cua* device, and what is it used for?

I want to know what the /dev/cua* devices are used for, and what "cua" stands for.
Kind regards
According to the OpenBSD Device Drivers Manual:
"For hardware terminal ports, dial-out is supported through matching device nodes called calling units. For instance, the terminal called /dev/tty03 would have a matching calling unit called /dev/cua03."
So "cua" refers to the calling unit: the dial-out counterpart of the corresponding /dev/tty* terminal device, used for outgoing rather than incoming connections on the same serial line.

Is there a possibility to add extra voice commands to the voice guidance running from HERE Maps?

Is there a possibility to add extra voice commands to the voice guidance that runs from HERE Maps?
For example, in addition to the built-in "Turn right" instruction from HERE Maps, I want to add something like "Stop after turning right".
NMAAudioManager is the central class used by the HERE SDK for iOS to modify the application's AVAudioSession and play audio. It is the interface that NMANavigationManager uses to play audio feedback such as voice instructions. You can also use NMAAudioManager to choose whether hardware keys directly control HERE SDK volume, and to set the volume as a factor relative to the user's device volume.
The NMAAudioManager contains a queue of audio output objects. You can add to this queue by calling playOutput: with NMAAudioFileOutput, NMATTSAudioOutput, or your own NMAAudioOutput implementation. You can also use NMAAudioManager methods such as clearQueue, skipCurrentOutput, and stopOutputAndClearQueue to manage audio output in this queue.
Please refer to the link below for a detailed implementation:
developer.here.com/documentation/ios-premium/dev_guide/topics/audio-management.html

Intra Frame Settings for UtVideo Codec in VirtualDub

In my previous question I asked for a convenient codec that supports intra frames and avoids onion-skin ghosting. Now I'm using the UtVideo codec.
Codec which supports intra-frame
Which UtVideo variant should I use?
And how do I get the right settings for capturing video frame by frame?
All variants are fine, but they correspond to different YUV matrices/subsamplings: normally you should know what your capture source uses and select the matching variant.
If you capture the screen there should be a UtVideo RGB option; I'm not sure why you don't have it. Try the latest installer: http://umezawa.dyndns.info/wordpress/?p=6107
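If you are unsure which UtVideo variants are actually installed, you can enumerate the VCM codecs and look for the UtVideo FOURCCs (commonly ULRG/ULRA for RGB/RGBA and ULY0/ULY2/ULH0/ULH2 for the BT.601/BT.709 YUV samplings; check the UtVideo documentation for the exact mapping). A rough Windows-only sketch using the Video for Windows API:

```cpp
// List installed VCM video codecs so the UtVideo variants (and their FOURCCs)
// can be spotted. Build as a Windows console program and link with vfw32.lib.
#include <windows.h>
#include <vfw.h>
#include <cstdio>
#include <cstring>

int main() {
    for (DWORD i = 0; ; ++i) {
        ICINFO info = {};
        info.dwSize = sizeof(info);
        // Passing an index instead of a FOURCC makes ICInfo enumerate codecs.
        if (!ICInfo(ICTYPE_VIDEO, i, &info))
            break;

        char fourcc[5] = {};
        memcpy(fourcc, &info.fccHandler, 4);

        // ICInfo only fills in part of ICINFO; open the codec to get its name.
        HIC hic = ICOpen(info.fccType, info.fccHandler, ICMODE_QUERY);
        if (hic) {
            ICINFO full = {};
            full.dwSize = sizeof(full);
            ICGetInfo(hic, &full, sizeof(full));
            // UtVideo entries typically show FOURCCs such as ULRG, ULY2, ULH2.
            printf("%-4s  %ls\n", fourcc, full.szDescription);
            ICClose(hic);
        }
    }
    return 0;
}
```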

How do I find ANY beacon using the AltBeacon Android reference library?

I'm using the AltBeacon Android reference library for detecting beacons.
There is an option to configure the parser to detect other, non-AltBeacon beacons, e.g. Estimote (as described here), by adding a new BeaconParser (see this), which works a treat.
However, how do I allow it to detect ALL beacons of any UUID/format (AltBeacons, Estimotes, Roximity, etc.)? I've tried no parsers, blank parameters, and omitting the "m:2-3=.." parameter. Nothing works.
Thanks
You can configure multiple parsers to be active at the same time so you can detect as many beacon types as you want simultaneously. But there is no magic expression that will detect them all.
Understand that the BeaconParser expression tells the library how to decode the raw bytes of a Bluetooth LE advertisement and convert it into identifiers and data fields. Each time a company comes up with a new beacon transmission format, a new parser format may be needed.
Because of intellectual property restrictions, the library cannot be preconfigured to detect proprietary beacons without permission. This is why you must get the community-provided expressions for each proprietary type.

Interface for Volume intensity in FMLE

FMLE = Flash Media Live Encoder 3.0
I have posted this question on the Adobe forum, but I'm not sure whether the people there have programming experience.
I am a developer writing a video capture device and an audio capture device. The devices are written in DirectShow and already work in other encoders. I am integrating with FMLE and have run into this problem:
The audio device doesn't have a usable volume bar in FMLE. The FMLE error is "The selected audio device "censored (company secret)" doesn't allow setting volume intensity. Disabling the volume slider control."
My audio device implements these interfaces along with the standard DirectShow filter interfaces:
IBasicAudio
IAMAudioInputMixer
I put tracepoints in QueryInterface and found that FMLE queries for the following (my comments are after the //):
{IID_IUnknown}
{IID_IPersistPropertyBag}
{IID_IBaseFilter}
{IID_IAMOpenProgress}
{IID_IAMDeviceRemoval}
{IID_IMediaFilter}
{IID_IAMBufferNegotiation}
{IID_IAMStreamConfig}
{IID_IPin}
{IID_IReferenceClock}
{IID_IMediaSeeking}
{IID_IMediaPosition}
{IID_IVideoWindow} // WTF ?? query video window ?
{IID_IBasicAudio}
{2DD74950-A890-11D1-ABE8-00A0C905F375} // I think this is IID_IAMFilterMiscFlags
What am I missing? Doesn't FMLE use IAMAudioInputMixer?
Does anyone know the exact interface FMLE uses for volume intensity? I assumed it was IBasicAudio, but FMLE doesn't seem to call any of its methods.
Answer provided by Ram Gupta on the Adobe forum:
"FMLE does not query for the CLSID_AudioInputMixerProperties interface.
FMLE enumerates all the pins of the audio source filter (using EnumPins) and then extracts each pin's info using the QueryPinInfo function.
FMLE searches for the audio filter pin whose direction is PINDIR_INPUT (using QueryPinInfo) and then queries it for the IAMAudioInputMixer interface to set the volume level.
Could you please check whether the following functions are properly implemented:
--> get_Enable: it should set its parameter value to TRUE.
--> put_MixLevel
--> QueryPinInfo"
This solution did work. My problem was that I had never declared an input pin (since I don't have any DirectShow-related input), so FMLE had no pin to find and query for IAMAudioInputMixer. Once the filter exposed an input pin implementing that interface, the volume slider worked. A minimal sketch of such a pin follows.
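The sketch below assumes the DirectShow base classes (streams.h); the class name, the m_mixLevel member, and the owning CBaseFilter are placeholders, while the IAMAudioInputMixer method signatures come from strmif.h:

```cpp
// A dummy input pin whose main job is to expose IAMAudioInputMixer so that
// hosts like FMLE, which look for a PINDIR_INPUT pin, can set the volume.
#include <streams.h>

class CVolumeInputPin : public CBaseInputPin, public IAMAudioInputMixer
{
public:
    CVolumeInputPin(CBaseFilter *pFilter, CCritSec *pLock, HRESULT *phr)
        : CBaseInputPin(NAME("Volume input pin"), pFilter, pLock, phr, L"Mixer In"),
          m_mixLevel(1.0) {}

    DECLARE_IUNKNOWN

    // Hand out IAMAudioInputMixer in addition to the base pin interfaces.
    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_IAMAudioInputMixer)
            return GetInterface(static_cast<IAMAudioInputMixer *>(this), ppv);
        return CBaseInputPin::NonDelegatingQueryInterface(riid, ppv);
    }

    // The pin never actually connects, so accept any media type.
    HRESULT CheckMediaType(const CMediaType *) override { return S_OK; }

    // --- The IAMAudioInputMixer members FMLE reportedly relies on ---
    STDMETHODIMP get_Enable(BOOL *pfEnable) override
    {
        if (!pfEnable) return E_POINTER;
        *pfEnable = TRUE;                        // report the input as enabled
        return S_OK;
    }
    STDMETHODIMP put_Enable(BOOL) override { return S_OK; }

    STDMETHODIMP put_MixLevel(double Level) override
    {
        m_mixLevel = Level;                      // forward to the capture hardware here
        return S_OK;
    }
    STDMETHODIMP get_MixLevel(double *pLevel) override
    {
        if (!pLevel) return E_POINTER;
        *pLevel = m_mixLevel;
        return S_OK;
    }

    // Remaining IAMAudioInputMixer members can simply be stubs.
    STDMETHODIMP put_Mono(BOOL) override            { return E_NOTIMPL; }
    STDMETHODIMP get_Mono(BOOL *) override          { return E_NOTIMPL; }
    STDMETHODIMP put_Pan(double) override           { return E_NOTIMPL; }
    STDMETHODIMP get_Pan(double *) override         { return E_NOTIMPL; }
    STDMETHODIMP put_Loudness(BOOL) override        { return E_NOTIMPL; }
    STDMETHODIMP get_Loudness(BOOL *) override      { return E_NOTIMPL; }
    STDMETHODIMP put_Treble(double) override        { return E_NOTIMPL; }
    STDMETHODIMP get_Treble(double *) override      { return E_NOTIMPL; }
    STDMETHODIMP get_TrebleRange(double *) override { return E_NOTIMPL; }
    STDMETHODIMP put_Bass(double) override          { return E_NOTIMPL; }
    STDMETHODIMP get_Bass(double *) override        { return E_NOTIMPL; }
    STDMETHODIMP get_BassRange(double *) override   { return E_NOTIMPL; }

private:
    double m_mixLevel;                               // 0.0 .. 1.0
};
```

The owning filter then only has to create this pin and return it from its GetPinCount()/GetPin() overrides so that a host enumerating the filter's pins can find it.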
