Is there a possibility to add extra voice commands to the voice guidance running from HERE maps? - here-api

Is there a possibility to add extra voice commands to the voice guidance that runs from HERE maps?
For example, in addition to "Turn right" (from HERE maps), I want something like "Stop after turn right".

NMAAudioManager is the central class used by the HERE SDK for iOS to modify the application AVAudioSession and play audio. It is the interface that the NMANavigationManager uses to play audio feedback such as voice instructions. You can also use NMAAudioManager to change whether the hardware keys directly control the HERE SDK volume, and to set the volume as a factor relative to the user's device volume.
The NMAAudioManager contains a queue of audio output objects. You can add to this queue by calling playOutput: with NMAAudioFileOutput, NMATTSAudioOutput, or your own NMAAudioOutput implementation. You can also use NMAAudioManager methods such as clearQueue, skipCurrentOutput, and stopOutputAndClearQueue to manage audio output in this queue.
Please refer to the link below for the detailed implementation:
developer.here.com/documentation/ios-premium/dev_guide/topics/audio-management.html

Related

Python Library for Simulating Mouse Position and Keyboard Presses

Please suggest a Python library to control the mouse position while playing games like CS:GO. I have used pynput and win32api; both do not work while playing a game.
The way this works in the Win32 API is that input commands are handled with a chain of hooks. Simply put, the input device sends input to the OS, and the OS sends input to the running applications. An application can attach itself to this hook system and choose either to suppress a handled input command or to pass it along the chain. Some modern games take full control of the input chain by not passing the input commands they handle along the chain.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms644960%28v=vs.85%29.aspx
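For illustration, here is a minimal C++ sketch of that chain: a low-level mouse hook that sees each event before it travels further and decides whether to swallow it or pass it on. The choice of WH_MOUSE_LL and of suppressing right-button presses is just an example, not anything specific to the question.

#include <windows.h>
#pragma comment(lib, "user32.lib")

// Low-level mouse hook: sees each mouse event before it travels further
// down the hook chain to other applications.
LRESULT CALLBACK MouseProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_RBUTTONDOWN)
        return 1;  // non-zero return suppresses the event for everyone downstream
    return CallNextHookEx(NULL, nCode, wParam, lParam);  // pass it along the chain
}

int main()
{
    HHOOK hook = SetWindowsHookExW(WH_MOUSE_LL, MouseProc, GetModuleHandleW(NULL), 0);
    MSG msg;  // a message loop is required for the hook callback to run
    while (GetMessageW(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}

A game that takes over the input chain as described above simply never passes such events along, which is why libraries that inject input at the OS level often appear to do nothing in-game.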

Estimote Proximity Profile UUID for Android Development

I'm trying to make an Android program that will always scan for a specific Bluetooth device, and alert the user when the phone is within proximity.
I modified the demo code provided here: https://github.com/devunwired/accessory-samples/tree/master/BluetoothGatt
The second demo there (titled "BeaconActivity") constantly scans for Bluetooth devices with the thermometer service. For testing purposes, I am trying to make it scan for the proximity of an Estimote. I do not want to use the provided Estimote SDK, since I plan on using a more generic Bluetooth device in the future.
In the above "BecaonActivity" a UUID for the thermometer service is defined. I tried switching this number out for the UUID for Estimotes defined on this page: https://community.estimote.com/hc/en-us/articles/200761958-Advertising-Packet-Estimote-s-Proximity-UUID
From the above linked source code, there is also a "TemperatureBeacon" class that has a "short-form UUID" of "0x1809". I realized that this was just the 5th through 8th characters of the full thermometer service UUID, so I changed it to "0x7F30".
After mostly just changing the UUIDs and leaving most of the code the same, I tested it on my phone, but it could not detect the Estimote. Any ideas about what I'm doing wrong?
This is Wojtek Borowicz, a community evangelist at Estimote. We're not ready yet to make the thermometer specs available for Android. Stay tuned!
Cheers.
I am not familiar with the demo code you provided, but did you actually try to use the Estimote proximity UUID (https://github.com/Estimote/Android-SDK)?
private static final String ESTIMOTE_PROXIMITY_UUID = "B9407F30-F5F8-466E-AFF9-25556B57FE6D";
This might be helpful as well:
Check if Bluetooth Low Energy Beacons are nearby in Android
As David points out, for Android devices you do not really have to consider UUIDs or services if you are only interested in proximity.

DirectShow - Order of invocation of IAMStreamConfig::SetFormat and ICaptureGraphBuilder2::RenderStream creates issues in some video cameras

I have to configure my video camera display resolution before capturing and processing the data. Initially I did it as follows.
1. Created all the necessary interfaces.
2. Added the camera and renderer filters.
3. Did RenderStream with the Capture and Preview pin categories.
4. Looped through the AM_MEDIA_TYPE structures and set the parameters.
This worked for a lot of cameras, but a few cameras failed. Then I changed the order of steps 3 and 4 above, i.e., I set the parameters before calling RenderStream. This time the previously failing cameras went through, but a few onboard cameras (e.g., in SONY VAIO laptops) seem to fail.
Now, my questions are:
Which is the optimal and correct method of getting and setting AM_MEDIA_TYPE parameters and running the graph?
If different cameras need different orders, can I get an indication of which order is best for a particular camera by going through the camera's DirectShow interfaces? That would also serve my purpose.
Please help me with this at the earliest.
Thanks and regards,
Shiju
IAMStreamConfig::SetFormat needs to be used to set the capture format before the pin is connected and rendered. This way, the downstream subchain of filters is built with the proper media types.
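A minimal sketch of that order in C++, assuming pBuild (ICaptureGraphBuilder2) and pCam (the camera's IBaseFilter) come from your existing setup code, and using 640x480 as an example target format:

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

HRESULT SetFormatThenRender(ICaptureGraphBuilder2 *pBuild, IBaseFilter *pCam)
{
    IAMStreamConfig *pConfig = NULL;
    HRESULT hr = pBuild->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                       pCam, IID_IAMStreamConfig,
                                       (void**)&pConfig);
    if (FAILED(hr))
        return hr;

    int count = 0, size = 0;
    hr = pConfig->GetNumberOfCapabilities(&count, &size);

    for (int i = 0; SUCCEEDED(hr) && i < count; ++i) {
        VIDEO_STREAM_CONFIG_CAPS caps;
        AM_MEDIA_TYPE *pmt = NULL;
        if (FAILED(pConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps)))
            continue;

        if (pmt->formattype == FORMAT_VideoInfo && pmt->pbFormat) {
            VIDEOINFOHEADER *vih = (VIDEOINFOHEADER*)pmt->pbFormat;
            if (vih->bmiHeader.biWidth == 640 &&
                vih->bmiHeader.biHeight == 480) {
                hr = pConfig->SetFormat(pmt);   // pin is still unconnected here
                i = count;                      // stop after the first match
            }
        }
        CoTaskMemFree(pmt->pbFormat);   // free what GetStreamCaps allocated
        CoTaskMemFree(pmt);
    }
    pConfig->Release();
    if (FAILED(hr))
        return hr;

    // Connect/render only after the format is fixed, so downstream filters
    // negotiate with the media type chosen above.
    return pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                                pCam, NULL, NULL);
}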

change recording file programmatically in directshow

I made a console application, using DirectShow, that records from a live source (now a webcam, later a TV capture card), adds the current date and time in an overlay, and then saves the audio and video as .asf.
Now I want the output file to change every 60 minutes without stopping the graph. I must not lose a single second of the live stream.
The graph is something like this one:
http://imageshack.us/photo/my-images/543/graphp.jpg/
I took a look at GMFBridge, but I have some compile problems with their examples.
I am wondering if there is a way to split what exists from the overlay filter and the audio source, connect them to another ASF writer (paused), and then switch between them every 60 minutes.
The paused ASF writer's file name must change (pp.asf, pp2.asf, pp4.asf ...). Something like this:
http://imageshack.us/photo/my-images/546/graph1f.jpg/
with pp1 paused. I found some people on the internet who say that the ASF writer deletes the current file if the graph does not go into stop mode.
Well, I have a product (http://www.videophill.com) that does exactly what you described (it's used for broadcast compliance recording purposes), and I found that the only way to do it is this:
create a DirectShow graph that will be used only to capture the audio and video
then, at the end of the graph, insert SampleGrabber filters, both for audio and video
then, use an IWMWriter to create and save the WMV file, using samples fetched from the SampleGrabber filters
when the time comes, close one IWMWriter and create another one.
That way, you won't lose a single frame when switching the output files.
Of course, there is also the question of queueing and storing the samples (when switching the writers) and of properly re-aligning the audio/video timestamps, but from my research that's the only 'normal' way to do it, and I have used it in practice; a rough sketch of the grabber-to-writer hand-off follows.
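Here is that hand-off sketched in C++: a SampleGrabber callback that copies each buffer into the current IWMWriter. The writer is assumed to be already configured with BeginWriting called; stream selection per pin, locking around the writer swap, and timestamp re-alignment are left out, and g_pWriter/g_streamNum are illustrative names, not part of any SDK.

#include <dshow.h>
#include <qedit.h>    // ISampleGrabberCB (ships with older SDKs / DirectShow samples)
#include <wmsdk.h>    // IWMWriter, INSSBuffer (Windows Media Format SDK)
#include <cstring>

IWMWriter *g_pWriter = NULL;  // currently active writer; swapped every 60 minutes
WORD g_streamNum = 1;         // writer stream this grabber feeds (illustrative)

class GrabberCB : public ISampleGrabberCB
{
public:
    // Called by the SampleGrabber for every buffer that passes through it.
    STDMETHODIMP BufferCB(double sampleTime, BYTE *pBuffer, long bufferLen)
    {
        if (!g_pWriter) return S_OK;
        INSSBuffer *pSample = NULL;
        HRESULT hr = g_pWriter->AllocateSample(bufferLen, &pSample);
        if (SUCCEEDED(hr)) {
            BYTE *pDst = NULL; DWORD maxLen = 0;
            pSample->GetBufferAndLength(&pDst, &maxLen);
            memcpy(pDst, pBuffer, bufferLen);
            pSample->SetLength(bufferLen);
            // WMF timestamps are in 100 ns units; re-align against the
            // current writer's start time in real code.
            QWORD cnsTime = (QWORD)(sampleTime * 10000000.0);
            hr = g_pWriter->WriteSample(g_streamNum, cnsTime, 0, pSample);
            pSample->Release();
        }
        return hr;
    }
    STDMETHODIMP SampleCB(double, IMediaSample *) { return E_NOTIMPL; }

    // Minimal IUnknown for a statically allocated callback object.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == __uuidof(ISampleGrabberCB)) {
            *ppv = static_cast<ISampleGrabberCB *>(this);
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
};

You would register an instance with ISampleGrabber::SetCallback(&cb, 1) (1 selects BufferCB). Switching files then amounts to creating and configuring a new IWMWriter, calling BeginWriting on it, and swapping g_pWriter while buffering the samples that arrive in between.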
The solution is writing a custom DShow filter with, in your case, two input pins: one for the audio stream and the other for the video stream. Inside that filter (it doesn't have to be inside from an architecture point of view, because you can also use callbacks, for example, and do the job somewhere else) you create the ASF files. While switching files, the A/V data would be stored in a cache (e.g., a big enough circular buffer). You can also watch and modify A/V sync in that filter. For writing ASF files I would recommend the Windows Media Format SDK. You can also add output pins if you would like to pass the A/V data further, if necessary, for preview, parallel streaming, etc.
GMFBridge is a viable but complicated solution. A more direct approach I have implemented in the past is querying your ASF Writer for the IWMWriterAdvanced2 interface and setting a custom sink; that interface has methods to remove and add sinks on your ASF writer. The sink connected automatically will write to the file that you specified. One way to write wherever you want is:
1.) remove all default sinks:
pWriterAdv->RemoveSink(NULL);
2.) register a custom sink:
pWriterAdv->AddSink((IWMWriterSink*)&streamSink);
The custom sink can be a class that implements IWMWriterSink, which requires implementing callback methods that are called, e.g., when the ASF header is written (OnHeader(/* [in] */ INSSBuffer *pHeader);) and when a data packet is written (OnDataUnit(/* [in] */ INSSBuffer *pDataUnit);). In your implementation you can then write them wherever you want, and, for example, offer additional methods on this class for specifying the file name you want to write to.
Note that this solution does not quite get you where you want to be if you need the header information written in each of the 60-minute files: after the initial header you will only get ASF packet data. A workaround could be to re-write the initial header before any packet data of each file, but this will produce an unindexed (non-seekable) ASF file.
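A condensed sketch of those two steps in C++. Here pAsfWriter is the WM ASF Writer's IBaseFilter from your graph and pCustomSink your IWMWriterSink implementation (both assumed to exist); the IServiceProvider lookup shown is one common way to reach IWMWriterAdvanced2 on the filter.

#include <dshow.h>
#include <servprov.h>  // IServiceProvider
#include <wmsdk.h>     // IWMWriterAdvanced2, IWMWriterSink

HRESULT UseCustomSink(IBaseFilter *pAsfWriter, IWMWriterSink *pCustomSink)
{
    IServiceProvider *pProvider = NULL;
    IWMWriterAdvanced2 *pWriterAdv = NULL;

    // The WM ASF Writer exposes the underlying WMF writer object through
    // IServiceProvider rather than plain QueryInterface.
    HRESULT hr = pAsfWriter->QueryInterface(IID_IServiceProvider,
                                            (void **)&pProvider);
    if (SUCCEEDED(hr)) {
        hr = pProvider->QueryService(IID_IWMWriterAdvanced2,
                                     IID_IWMWriterAdvanced2,
                                     (void **)&pWriterAdv);
        pProvider->Release();
    }
    if (FAILED(hr))
        return hr;

    hr = pWriterAdv->RemoveSink(NULL);          // 1.) remove all default sinks
    if (SUCCEEDED(hr))
        hr = pWriterAdv->AddSink(pCustomSink);  // 2.) register the custom sink
    pWriterAdv->Release();
    return hr;
}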

Dial a number and play a voice file instead of microphone input

We are trying to write an application which dials a number and plays a voice file instead of the microphone input. Is this possible on Maemo (N900)?
We cannot find any "answering machine"-like program for the N900. Does this mean that there is no way to play a voice file instead of the microphone input?
There is a way: play that voice file and make PulseAudio believe it's a proper input, and disable the microphone input. For more information, see my question:
How to redirect from Audio Output to Mic Input using PulseAudio?
It is possible, but you need good PulseAudio knowledge to do it; I can already set it up easily on my PC using pavucontrol. Drop me a message (or better, answer my question) if you bite the bullet and decide to learn how to use pactl/pacmd.
