Enabling two apps to use a single sound device - asterisk

I have:
A USB sound card, which is the ALSA "Device", i.e. "hw:1,0"
The Asterisk console configured to use "plughw:1,0"
This works, letting me use the USB sound card for making and receiving voice calls via Asterisk.
I also want to use multimon to decode DTMF tones during the call. If I stop Asterisk, I can run "aoss multimon -T DTMF" to decode the tones successfully, but to do so I had to create an ALSA config file (/etc/asound.conf or ~/.asoundrc) like so:
pcm.dsp0 {
    type plug
    slave.pcm "hw:1,0"
}
Starting Asterisk, which grabs "plughw:1,0", means I get an error when trying to run multimon. I believe this is because only one application can access an ALSA device at any one time.
I think I need to split hw:1,0 into two new ALSA devices, which I have been trying to do using ALSA plugins (dmix/multi), but I'm afraid I can't get my head around how to configure them!
P.S. I want to use multimon because I also have other use cases on this same setup for decoding tones other than just DTMF.

As CL has pointed out, you could use dsnoop for analysing the audio through multimon. The following extract is taken from Basic Virtual PCM Devices for Playback/Capture, ALSA | nairobi-embedded:
The dmix[:$CARD...] and dsnoop[:$CARD...] Virtual PCM Devices
A limitation of the hw[:$CARD...] and plughw[:$CARD...] virtual PCM devices (on systems with no hardware mixing) is that only one application at a time can access an audio stream. The dmix[:$CARD...] (playback) and dsnoop[:$CARD...] (capture) virtual PCM devices allow mixing or sharing, respectively, of a single stream among several applications without the intervention of a sound server, i.e. these devices use native ALSA-library mechanisms.
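For example, the questioner's pcm.dsp0 could be layered over dsnoop instead of hw:1,0 directly; a minimal sketch (the ipc_key value is arbitrary, any system-unique integer works):

pcm.dsp0 {
    type plug
    slave.pcm {
        type dsnoop
        ipc_key 2048        # any unique integer; identifies the shared capture buffer
        slave.pcm "hw:1,0"
    }
}

Pointing Asterisk's capture at a dsnoop-based PCM as well (e.g. input_device in alsa.conf) would let both Asterisk and "aoss multimon -T DTMF" read hw:1,0 at the same time; dmix is the analogous plugin for the playback direction.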

Related

Which protocol to use for a multi-stream application? RTMP?

I'm trying to stream several iPad screens to a single Python client (computer) on a local network, but I don't know which protocol to use.
I can do it with one iPad using MonaServer, an app that streams over RTMP, and a little Python script to read the video.
But I'm running into problems using several iPads because, as far as I can tell, RTMP uses a single port on Windows (1935), and I'm not sure it's possible to multi-stream with RTMP.
I'm not a pro at networking, so I'm open to any suggestions.
What you need is to follow the wikis and usage of open-source projects, to get some instincts about multi-client live streaming.
For example, you could use OBS to publish streams to a media server, like SRS, and play them over different protocols such as RTMP/HTTP-FLV/HLS/WebRTC.
You can publish multiple streams; they are not mutually exclusive. They can be played by different players, depending on the protocol you choose; please read this post.
Try it.
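To make that concrete, here is a hedged sketch with ffmpeg publishing to a hypothetical SRS server at 192.168.0.10 (the file names and address are placeholders). All publishers share port 1935; the stream key at the end of the URL is what tells the streams apart:

ffmpeg -re -i ipad1.mp4 -c copy -f flv rtmp://192.168.0.10/live/stream1 &
ffmpeg -re -i ipad2.mp4 -c copy -f flv rtmp://192.168.0.10/live/stream2 &
ffplay rtmp://192.168.0.10/live/stream1

So one port does not mean one stream; each iPad would simply publish under its own stream key.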

How to create a local virtual IP camera that can be accessed from other software

I need to create several local virtual IP cameras for a project I'm making. I have tried several programs, and the closest I have gotten was with Magic Camera, because it would let me create a virtual camera, but it won't let me assign a source to that camera. I need to assign an IP address and a username with a password, so that I can access the IP camera's video and use that virtual camera in a program I'm developing. The thing is that the cameras' brand is not supported by LabVIEW, so I need to use a virtual local camera to use these cameras (3S Vision IP cameras).
Thanks in advance!
From the National Instruments Support Knowledgebase:
Connecting to an Arbitrary MJPEG IP Camera with IMAQdx Using Third Party Virtual Camera Emulator
http://digital.ni.com/public.nsf/allkb/9446A8C25CC99F7586257A56004D513D
Here are the options for using IP cameras in LabVIEW as of 2019 (in case someone like me still needs this):
Use Vision Acquisition Software 14.5 (February 2015)
(with LabVIEW 2014 SP1 and Vision Development Module 2014-2017 SP1)
Pros:
Official, native support;
Any # of cameras.
Cons:
You lose all the features introduced in newer versions of LabVIEW;
Cameras must support and be configured to stream in MJPEG over HTTP.
Additional info:
It's the last version to support arbitrary IP cameras. Basler and Axis IP cameras were supported until VAS 19.0.
Cameras in the same subnet should be detected automatically. If the cameras are in another network, you can try to add them manually as follows (a filled-in example follows these steps):
Go to the %Public%\Documents\National Instruments\NI-IMAQdx\Data\ folder;
Open or create the file IPCameras.ini in a text editor;
If creating it, place the [IPCameras] section on the first line:
[IPCameras]
Add a line for each camera in the following format:
cameraSerialNumber = IPAddress, MJPEG stream URL, camera brand, camera description
Save your changes and restart NI MAX.
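For example, a file with a single camera entry might look like this (every value below is made up for illustration):

[IPCameras]
; serial = IP address, MJPEG stream URL, brand, description
12345678 = 192.168.0.42, http://192.168.0.42/video.mjpg, 3S Vision, Hall camera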
Use a DirectShow device (webcam) emulator
The NI-IMAQdx driver supports USB 2.0 cameras through the DirectShow interface. By using software that creates such an interface for IP cameras, they can be used as regular USB 2.0 cameras.
There are multiple tools available:
IP Video Source
Pros:
Free;
Any # of cameras.
Cons:
Each camera must be added manually through the emulator's settings;
Each camera's resolution must be set manually in the emulator's settings;
Cameras must support and be configured to stream in MJPEG over HTTP(S);
The 32- and 64-bit versions work independently of each other. NI MAX is a 32-bit application, so it won't show cameras emulated by the 64-bit tool. However, they are still detected and can be used in LabVIEW with IMAQdx VIs.
Additional info:
The camera alias displayed in LabVIEW can be changed in the following way:
Go to the %Public%\Documents\National Instruments\NI-IMAQdx\Data\ folder;
Select one of the camX.iid files and open it in a text editor;
Find the InterfaceName attribute and set its value to the desired name (see the Vendor attribute's value for the name you gave that camera in the emulator's settings);
Save your changes, keeping the file name the same;
Restart LabVIEW.
Moonware Universal Source Filter [more info]
Pros:
Supports JPEG/MJPEG/MPEG4/H264 over HTTP/RTSP;
Hardware decoding;
Low latency;
Multiple cameras.
Cons:
32-bit only; a 64-bit version is not likely to happen;
Adds a watermark to the image (free version); the paid version ($49 per PC) has no watermark;
Each camera must be added manually through emulator's settings.
and more
Use Multimedia for LabVIEW add-on
Pros:
Native interface (LabVIEW API for FFmpeg libraries);
Supports most codecs and protocols;
Any # of cameras;
Full control over data acquisition and processing (down to individual FFmpeg options).
Cons:
Paid: $949 per PC (developer license), $19 per PC (runtime license), 30-day trial;
A lower-level analog of the NI-IMAQdx driver (read: more complicated).
Use libVLC to receive images from a camera
(or another similar library)
Pros:
Free;
Supports most codecs and protocols;
Possibility of hardware decoding (depends on library usage);
Any # of cameras.
Cons:
You'll have to interface with the libVLC library directly through the Call Library Function Node;
To receive video frame by frame, you'll have to write a simple C library which libVLC will call to deliver each frame (see the example in the linked thread, and the sketch below).
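As a hedged illustration of that libVLC route (a sketch, not the linked thread's code): libVLC can decode into a buffer you supply via its video callbacks, and your wrapper then hands each finished frame to LabVIEW. The URL, frame size and RV24 chroma below are assumptions; build with gcc sketch.c -lvlc:

#include <stdio.h>
#include <vlc/vlc.h>

enum { W = 640, H = 480 };                 /* must match the format forced below */
static unsigned char framebuf[W * H * 3];  /* one RV24 (24-bit RGB) frame */

/* libVLC asks where to decode the next frame */
static void *lock_cb(void *opaque, void **planes) {
    planes[0] = framebuf;
    return NULL;                           /* opaque picture id, unused here */
}

/* called once framebuf holds a complete frame */
static void display_cb(void *opaque, void *picture) {
    /* copy framebuf wherever your Call Library Function Node can read it */
}

int main(void) {
    libvlc_instance_t *inst = libvlc_new(0, NULL);
    /* hypothetical camera URL; substitute your camera's RTSP/HTTP stream */
    libvlc_media_t *m = libvlc_media_new_location(inst, "rtsp://192.168.0.42/stream1");
    libvlc_media_player_t *mp = libvlc_media_player_new_from_media(m);
    libvlc_media_release(m);
    libvlc_video_set_callbacks(mp, lock_cb, NULL, display_cb, NULL);
    libvlc_video_set_format(mp, "RV24", W, H, W * 3);  /* fixed size and pitch */
    libvlc_media_player_play(mp);
    getchar();                             /* stream until Enter is pressed */
    libvlc_media_player_stop(mp);
    libvlc_media_player_release(mp);
    libvlc_release(inst);
    return 0;
}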

How to implement a "fax protocol"?

I want to write a program that programmatically sends faxes. Or receives faxes. But not with a modem. I guess I'm trying to write a fax simulator. Everything that the hardware does, I want to do using software.
There are a billion SO questions on the topic, but they either suggest an online service to use or point me to a library that talks to my computer's modem. So here are my specific questions:
When I send a fax, I can hear the warbling on the telephone line. This tells me that my fax machine is generating tones that are consumable by the recipient's machine. What is that protocol? Is there an RFC which specifies how a "pixel" is converted to a "frequency"? Do the machines communicate back and forth, or is it one-way?
If we can agree that a fax machine translates sound frequencies to images, then one ought to be able to write a program which takes an MP3 of a fax transmission and outputs a graphic. What do I need to know in order to do this?
Are these questions based on any flawed assumptions? Where should I start so that I can accomplish goal #2 from above?
Actually, in a modem, a chip called a DSP (digital signal processor) is responsible for converting audio signals into digital data, and the same can be done with a software library. There is already an open-source DSP library called SpanDSP, developed by Steve Underwood: http://www.soft-switch.org/.
You can build your own application using the SpanDSP library, but it is wise to use some existing implementation of SpanDSP. Currently SpanDSP is used in the open-source FreeSWITCH, CallWeaver and Asterisk PBX systems.
But if you only want to send and receive faxes without getting into low-level development, then try the ICTFAX open-source FAX system.
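To make the SpanDSP route concrete, here is a minimal, hedged sketch of goal #2 from the question: decoding a captured call into a TIFF, assuming the audio is already raw 16-bit/8 kHz mono PCM in a hypothetical file fax_call.raw (link with -lspandsp -ltiff). Note that T.30 is a two-way negotiation, so passively decoding a recording is considerably harder in practice than this suggests:

#include <stdio.h>
#include <stdint.h>
#include <spandsp.h>

int main(void) {
    int16_t amp[160];                          /* 20 ms of 8 kHz mono audio */
    fax_state_t *fax = fax_init(NULL, 0);      /* 0 = called (receiving) side */
    t30_state_t *t30 = fax_get_t30_state(fax);
    t30_set_rx_file(t30, "received.tif", -1);  /* decoded pages land here */

    FILE *f = fopen("fax_call.raw", "rb");
    if (!f)
        return 1;
    while (fread(amp, sizeof(int16_t), 160, f) == 160)
        fax_rx(fax, amp, 160);                 /* run audio through the soft-modem */
    fclose(f);
    return 0;
}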
The fax specifications you would need are ITU T.4 and T.30, which cost lots of money and are almost wilfully difficult to understand, and they'll refer you to the various modem standards for how the actual 'warbling' is done.
If you're hoping for something free/easy like an RFC, then you should probably give up now.
If you did want to decode an audio file, you would need to view that as two completely separate tasks - first decoding the tones to a data stream (building several soft-modems, for the various ways fax machines can agree to communicate), and then decoding the data stream to pixels (writing a fax machine's software).
You are not fundamentally wrong that a fax machine converts light and dark into sound and then back again, or that it's possible to eavesdrop on a conversation between two fax machines and recover the image (either in real-time or via some kind of capture file, though I'm not sure that MP3 would work), but I suspect you've hugely, hugely underestimated the amount of work involved.
http://en.wikipedia.org/wiki/Fax has plenty of background.
The ITU protocols are very involved; IIRC the exact specifications are not free.

How to pipe raw PCM data from /dev/ttyUSB0 to a sound card?

I'm currently working on a small microphone, connected to the PC via an FPGA. The FPGA spits a raw data stream via UART/USB into my computer. I'm able to record, play and analyze the data.
But I can't play the "live" audio stream directly.
What works is saving the data stream in raw PCM format with a custom-made C program, and piping the content of the file into aplay. But that adds a 10-second lag to the data stream... Not so nice for demoing or testing.
tail -f snd.raw | aplay -t raw -f S16_LE -r 9000
Does someone have another idea how to get the audio stream into my ears faster? Why does
cat /dev/ttyUSB0 | aplay
not work? (Nothing happens.)
Thanks so far
marvin
You need an API that lets you stream audio buffers directly to the sound card. I haven't done it on Linux, but I've used FMOD for this purpose. You might find another API in this question. SDL seems popular.
The general idea is that you set up a streaming buffer, then your C program stuffs the incoming bytes into an array. The size is chosen to balance lag against jitter in the incoming stream. When the array is full, you pass it to the API and start filling another one while the first plays. A sketch of the same idea is shown below.
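A minimal sketch of that idea against ALSA's own API (no FMOD/SDL), assuming the same S16_LE mono 9000 Hz format as the aplay command above and a serial port already configured raw; build with gcc livepipe.c -lasound:

#include <alsa/asoundlib.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void) {
    snd_pcm_t *pcm;
    /* open the default playback device and ask for ~50 ms of buffering */
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1 /* mono */, 9000 /* Hz */,
                           1 /* allow resampling */, 50000 /* us latency */) < 0)
        return 1;

    int fd = open("/dev/ttyUSB0", O_RDONLY);
    if (fd < 0)
        return 1;

    int16_t buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* n bytes -> n/2 16-bit frames; a real program would carry odd bytes over.
           writei blocks while the card drains, which paces the loop */
        snd_pcm_sframes_t rc = snd_pcm_writei(pcm, buf, n / 2);
        if (rc < 0)
            snd_pcm_recover(pcm, rc, 0);   /* e.g. recover from under-runs */
    }
    snd_pcm_close(pcm);
    return 0;
}

The 50 ms latency request is the lag/jitter trade-off mentioned above: smaller means less delay, but more risk of under-runs when the UART stalls.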
That would seem to be the domain of the alsaloop program. However, this program requires two ALSA devices to work with, and you can see from its options that it goes to considerable effort to match the data flow of the two devices, something that you would not necessarily want to do yourself.
This Stack Overflow topic talks about how to create a virtual userspace device available to ALSA: maybe that is a route worth pursuing.
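For reference, a typical alsaloop invocation looks something like this (both device names are placeholders; -C is the capture side, -P the playback side):

alsaloop -C hw:1,0 -P hw:0,0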

Flex / AIR - Can it receive SYSLOG notices?

Is there a way for Flex / AIR to receive syslog notices from devices such as Cisco switches? Does anyone know of any information I can read or sites to look at?
If you are talking about calling native methods, Flex/AIR cannot do that. But there is an open-source AIR-Java bridge named Merapi that lets you connect your AIR app with Java code. I guess Java should be able to do what you are looking for.
AIR apps can read local files. In the case of receiving syslog notices from Cisco switches or other devices, I used to set them up to log to the receiving machine's local syslog, to have everything in one place (we're not talking Windows here :-)). Using mtail and grep I had a few consoles open that showed me what was coming in.
If you write an ActionScript parser to read your local syslog - using ByteArray as described here - then it should be possible to read through the whole file and note the interesting bits (I haven't done this myself, so no guarantees!).
If it is more a question of getting real-time data from devices, I would look into SNMP (but you will probably need to write additional stuff in PHP or Python to query the devices for you).
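For the console-watching half of that setup, a minimal sketch (the log path varies by distro, e.g. /var/log/messages on some systems, and the filter string is just an example):

tail -f /var/log/syslog | grep --line-buffered -i cisco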
