Stream music or audio files via Skype - unix

I want to stream music or audio files to another person over the internet via Skype. I'm using Debian Squeeze. My idea: open the audio-in stream used for the microphone and mix the audio file into it. I don't want to use a physical audio in/out bridge, but a software solution. Are there any similar projects? How can I manipulate the audio-in stream?

Linrad is your best bet.
The Linux sound system ALSA has a mechanism by which the output of one program can be sent to the input of another program.
http://www.sm5bsz.com/linuxdsp/install/snd-aloop.htm
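As a rough sketch of how the snd-aloop approach works (device names and file names here are assumptions; check `aplay -l` on your system):

```shell
# Load the ALSA loopback kernel module (assumed built for your kernel)
sudo modprobe snd-aloop

# Play the audio file into one side of the loopback device
aplay -D hw:Loopback,0,0 music.wav

# Then, in Skype's sound settings, select the other side of the
# loopback (hw:Loopback,1,0) as the recording/microphone device.
```

Whatever is played into subdevice 0 of the loopback card appears as capture input on subdevice 1, so Skype "hears" the file as if it came from a microphone.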

Related

How to compile audio and videos from a stream mpd file?

I have downloaded DRM-protected audio and video files together with a stream.mpd file. The audio and video files are encrypted with a key that can be found in the stream.mpd file. How can I decrypt them, combine the audio and video, and produce a playable MP4 file?
Just a quick check first - if the video and/or audio were protected by a standard DRM, it would not be normal for the key to be included in the mpd file, so I am guessing you are using ClearKey protection (https://github.com/Dash-Industry-Forum/ClearKey-Content-Protection).
Assuming this is the case, you can concatenate the segments into an mp4 file - see an example, and some discussion of limitations on Windows systems, here: https://stackoverflow.com/a/27017348/334402
You can use ffmpeg to decrypt - e.g.:
ffmpeg -decryption_key {key} -i {input-file} {output-file}
(https://ffmpeg.org/ffmpeg-formats.html#Options-1)
One thing to also be aware of is that most DASH videos have multiple bit-rate renditions, and the client downloads whichever bitrate is appropriate for the device and network conditions at each point during streaming. For this reason you may end up with a mix of bitrates/resolutions, and hence quality, in the final video. If this is an issue, your client may allow you to select a single bitrate for the entire video instead of switching.
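Putting the two steps together, a minimal workflow might look like this (file names are assumptions; adjust to match what you actually downloaded, and take {key} from the stream.mpd):

```shell
# 1. Concatenate the DASH init segment and the media segments
#    into a single encrypted mp4
cat init.mp4 segment_*.m4s > encrypted.mp4

# 2. Decrypt with the ClearKey key (hex string from stream.mpd);
#    -c copy avoids re-encoding
ffmpeg -decryption_key {key} -i encrypted.mp4 -c copy decrypted.mp4
```

If audio and video come as separate segment sets, repeat this per track and then mux the two decrypted files together with a second `ffmpeg -i video.mp4 -i audio.mp4 -c copy output.mp4` pass.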

Enabling two apps to use a single sound device

I have:
USB Sound which is alsa "Device" or "hw:1,0"
Asterisk console configured to use "plughw:1,0"
This works, letting me use the USB Sound for making and receiving voice calls via Asterisk.
I also want to use multimon to decode DTMF tones during the call. If I stop Asterisk I can run "aoss multimon -T DTMF" to decode the tones successfully but in order to do so I had to create an /etc/asoundrc file like so:
pcm.dsp0 { type plug slave.pcm "hw:1,0" }
Starting Asterisk, which grabs "plughw:1,0", means I get an error when trying to run multimon. I believe this is because only one application can access an ALSA device at any one time.
I think I need to split the hw:1,0 into two new alsa devices, which I have been trying to do using alsa plugins (dmix/multi) but I'm afraid I can't get my head around how to get these configured!
p.s. I want to use multimon as I also have other use cases for using it on the same setup to decode other tones than just DTMF.
As CL has pointed out, you could use dsnoop for analysing the audio through multimon. The following extract is taken from Basic Virtual PCM Devices for Playback/Capture, ALSA | nairobi-embedded:
The dmix[:$CARD...] and dsnoop[:$CARD...] Virtual PCM Devices
A limitation with the hw[:$CARD...] and plughw[$CARD...] virtual PCM devices (on systems with no hardware mixing) is that only one application at a time can access an audio stream. The dmix[:$CARD...] (playback) and dsnoop[:$CARD...] (capture) virtual PCM devices allow mixing or sharing, respectively, of a single stream among several applications without the intervention of a sound server, i.e. these devices use native ALSA library based mechanisms.
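A sketch of what the asoundrc could look like for this setup (untested; device name `shared_in` and the ipc_key value are assumptions, the slave is the questioner's hw:1,0):

```shell
# Shared capture device: several applications can read hw:1,0
# through dsnoop at the same time
pcm.shared_in {
    type dsnoop
    ipc_key 2048          # any integer unique among dsnoop/dmix defs
    slave.pcm "hw:1,0"
}

# Replacement for the original pcm.dsp0, now layered over dsnoop
# so "aoss multimon -T DTMF" no longer needs exclusive access
pcm.dsp0 {
    type plug
    slave.pcm "shared_in"
}
```

Asterisk would likewise need to be pointed at a plug-over-dsnoop device for capture (and, for playback sharing, a dmix device) rather than at plughw:1,0 directly.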

Creating http video stream using libVLC

I have a video that I want to broadcast on a network using the http protocol.
I know libVLC can do that, but I haven't been able to figure out how. I checked the Doxygen documentation, but it hasn't helped me. Can you help me, please?
Thanks
libVLC is the library for developing your own application; the VLC media player is the client AND server application that can do this. If you just need to stream, use VLC media player as a server. You can find the command-line/GUI steps if you google "vlc how to stream".
Basically, in the file-open dialog you get the option either to load a stream from another source or a local file, OR to run your own instance as a streaming server.
The play button at the bottom of the open dialog has a small button on the right to select "Stream" instead of "Play". But you need to have configured all the options correctly to set up the type of stream you are looking for.
Lastly, you can run another instance of vlc as client to test your stream locally.
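From the command line, the same thing can be sketched in one line (file name, port, and path are assumptions; `cvlc` is VLC without the GUI):

```shell
# Serve video.mp4 over HTTP as an MPEG-TS stream on port 8080
cvlc video.mp4 --sout '#standard{access=http,mux=ts,dst=:8080/stream}'

# On a client machine, play it back with:
vlc http://server-address:8080/stream
```

The `--sout` chain shown here is exactly what the GUI stream wizard generates for you; once it works in VLC, the same chain can be passed to libVLC via `libvlc_vlm_add_broadcast` or as a `:sout=...` media option in your own application.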

DVB Recording of a channel

I'm trying to record a DVB-Channel with a DVB-T Tuner.
I have already done a lot of research on this topic, but I haven't found concrete information on what to do.
Basically, I'm already able to create my own graph with the default GraphEdit, make a tune request, and watch a channel. Converting the graph to C# code with DirectShowLib, or to C++, isn't a big problem for me.
But what I don't know is the right approach to recording the broadcast (without decoding it to MPEG/AVI and so on).
The most important parts of the graph are some tuning related filters, they connect to the demultiplexer (demux), and the demux will output a video and audio stream.
The easiest way to get the MPEG stream is to put a filter before the demux, for example a SampleGrabber. There you will receive the complete transport stream as it is broadcast. But that normally contains multiple programs multiplexed on the same frequency. If you only need one program, you have to filter the other programs out of the stream.
If you only need a single program, it is probably easier to connect the audio and video streams coming out of the demultiplexer directly to a multiplexer, and write its output to a file. You need to make sure there is no decoder or any other filter between the demux and the mux. The problem is that you need to find a DirectShow multiplexer, as Windows does not ship with one. I don't know of any free multiplexer.
What you also can do is write the audio and video directly to a file. (again without decoding, or anything else). Then use for example ffmpeg to join the audio and video to a single file.
C:\> ffmpeg -i input.m2v -i input.mp2 -vcodec copy -acodec copy output.mpg
You probably also need to delay the audio or video stream to get them in sync.
One addition: of course you can also use ffmpeg to convert the multi-program transport stream to a single-program stream.
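That last step might look like this (the capture file name and program ID are assumptions; the first command lists what is actually in your stream):

```shell
# Inspect the transport stream: ffmpeg prints the programs and
# their elementary streams that are multiplexed on this frequency
ffmpeg -i capture.ts

# Extract one program by its ID, copying audio and video without
# re-encoding (-map 0:p:N selects all streams of program N)
ffmpeg -i capture.ts -map 0:p:1 -c copy single_program.ts
```

Because `-c copy` avoids decoding, this stays true to the original goal of recording without transcoding to MPEG/AVI.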

programmatically stream audio with NetStream

In Flex you can stream microphone audio to an FMS/Red5 server using NetStream.attachAudio, which requires a Microphone object. Is it possible to stream audio through the NetStream from somewhere other than a Microphone? For example, from a file/embedded resource?
The reason I'm asking is that I'd like to be able to run automated tests that don't require using an actual microphone.
Well, it looks like this isn't possible. My workaround is to use Soundflower to route audio-file playback (invoked outside of Flash) into a virtual microphone, which Flash then streams to the media server. From Flash's point of view, it's just as if you were speaking into the mic manually.