I'm working on a video/audio conferencing project and I have the following problem:
I record sound with DirectSound and send it over the network (multicast) every time the audio buffer (approx. 200 milliseconds of raw PCM) is full.
Using the DirectX.Capture project (from CodeProject), I'm sending images over the network (multicast).
Do you have any ideas on how to synchronize these two streams? On a LAN I have no synchronization problems, but I think there will be problems over the internet because of differences in network speed between peers, routing, etc.
Thank you!
As I'm starting to work with video streaming, I've got a question:
Video streaming is the process of breaking a video file into small data packets that are sent over the network. But where are they stored, and what happens to them after the streaming has finished? I'm asking because, unlike a download, streaming does not keep the file locally (at least, that's how it is described on the internet). What is the process of handling stream buffers under the hood? Can someone point me in the right direction?
Any help appreciated
Thanks
Most video streams are actually HTTP request/response based, i.e. the client (player) requests the video chunk by chunk and then plays it as it receives each chunk.
To answer your question about what happens to the chunks once they are downloaded: this depends on the player and the device. In general, the chunks are rebuilt into the particular video container being used, e.g. MP4, and then played.
How long they are stored will depend on the device and the player's caching rules and capacity.
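To make the "request chunk by chunk, rebuild, then play" idea concrete, here is a minimal browser-side sketch using the Media Source Extensions API. The segment URLs and the codec string are assumptions for illustration; real players (HLS/DASH) add buffering, adaptive bitrate and error handling on top of this.

    // Minimal sketch: fetch fragmented-MP4 segments over HTTP and feed them to a <video> element.
    // The segment URLs and the codec string below are placeholders.
    const video = document.querySelector('video') as HTMLVideoElement;
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', async () => {
      const mime = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"'; // assumed codecs
      const sourceBuffer = mediaSource.addSourceBuffer(mime);

      // Hypothetical segment list: an init segment followed by media segments.
      const segments = ['init.mp4', 'seg1.m4s', 'seg2.m4s', 'seg3.m4s'];

      for (const url of segments) {
        const chunk = await (await fetch(url)).arrayBuffer();   // request one chunk
        sourceBuffer.appendBuffer(chunk);                       // rebuild it into the playing stream
        await new Promise(done =>                               // wait for the buffer to absorb it
          sourceBuffer.addEventListener('updateend', done, { once: true }));
      }
      mediaSource.endOfStream();                                // no more chunks
    });

The appended data lives in the SourceBuffer only as long as the player keeps it; browsers are free to evict ranges that have already been played, which is why nothing like a complete downloaded file is left behind.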
I was hoping to build an application that streams audio (mp3, ogg, etc.) from my microphone to a web browser.
I think I can use the HTML5 audio tag to read/play the stream from my server.
The area I'm really stuck on is how to set up the streaming HTTP endpoint. What technologies will I need, and how should my server be structured to get the live audio from my mic and make it accessible from my server?
For example, for streaming mp3, do I constantly respond with mp3 frames as they are recorded?
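To make the question concrete, something like this rough Node sketch is what I have in mind; the route name is made up, and encoderOutput is just a placeholder for wherever the recorded/encoded MP3 bytes would come from:

    // Rough sketch of the idea: one long-lived HTTP response per listener, with MP3
    // data written to it as it is produced. 'encoderOutput' is a placeholder stream;
    // in reality the mic-capture/encoder pipeline would write into it.
    import * as http from 'http';
    import { PassThrough } from 'stream';

    const encoderOutput = new PassThrough(); // placeholder for "MP3 frames as they are recorded"

    http.createServer((req, res) => {
      if (req.url === '/live.mp3') {
        res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
        encoderOutput.pipe(res);                          // keep responding with frames as they arrive
        req.on('close', () => encoderOutput.unpipe(res)); // detach when the listener disconnects
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(8000);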
Thanks for any help!
First off, let's split this problem up into a few parts. You have the audio capture (recording), the encoding/codec, the server, and the receiving clients.
Capture -> Codec -> Server -> Several Clients
For audio capture, you will need to use the Web Audio API along with getUserMedia. This will allow you to get 32-bit floating point PCM samples from the recording device. This data stream takes up a ton of bandwidth... a few megabits per second for a stereo stream. This stream is not directly playable in an HTML5 audio tag, and while you could play it on the receiving end with the Web Audio API, it takes up too much bandwidth to be useful. You need to use a codec to get the bandwidth usage down.
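Roughly, the capture part looks like this (a minimal sketch; the buffer size and mono input are arbitrary choices, and ScriptProcessorNode, the API of that era, has since been superseded by AudioWorklet):

    // Capture 32-bit float PCM from the microphone with getUserMedia + Web Audio.
    const audioContext = new AudioContext();

    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
      const source = audioContext.createMediaStreamSource(stream);
      const processor = audioContext.createScriptProcessor(4096, 1, 1); // 4096-frame buffers, mono

      processor.onaudioprocess = (e: AudioProcessingEvent) => {
        const samples = e.inputBuffer.getChannelData(0); // Float32Array, values in -1..1
        handleSamples(samples);                          // placeholder: see the next sketch
      };

      source.connect(processor);
      processor.connect(audioContext.destination); // some browsers only run connected nodes
    });

    function handleSamples(samples: Float32Array): void {
      // placeholder: convert to 16-bit and send (next sketch)
    }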
The codecs you want to look at include MP3, AAC (and its variants such as HE-AAC), and Opus. Not all browsers support all codecs. MP3 is the most widely compatible, but AAC provides better quality for a given bitrate. Opus is a free and open codec but still doesn't have the greatest client adoption. In any case, there isn't yet a codec that you can run in-browser with any real stability. (Although it's being worked on! There are a lot of test projects made with Emscripten.) I solved this problem by reducing the bit depth of my samples to 16-bit signed integers and sending this PCM stream to a server, over a binary WebSocket, to do the encoding there.
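That bit-depth reduction and WebSocket send look roughly like this (the endpoint URL is a placeholder, and sendAsInt16 is what the handleSamples placeholder above would call):

    // Reduce 32-bit float samples to 16-bit signed integers and ship them over a binary WebSocket.
    const ws = new WebSocket('wss://example.com/pcm'); // hypothetical encoding server
    ws.binaryType = 'arraybuffer';

    function sendAsInt16(samples: Float32Array): void {
      const int16 = new Int16Array(samples.length);
      for (let i = 0; i < samples.length; i++) {
        const s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
        int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;      // scale to the 16-bit signed range
      }
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(int16.buffer); // raw 16-bit PCM (platform endianness, typically little-endian)
      }
    }

This halves the bandwidth compared to the 32-bit float stream, and 16 bits is plenty for voice.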
This encoding server took the PCM stream and ran it through a codec server-side. Here you can use whatever you'd like, such as a licensed codec binary or a tool like FFmpeg which encapsulates multiple codecs.
Next, this server streamed the data to a real streaming media server like Icecast. SHOUTcast and Icecast servers take the encoded stream and relay it to many clients over an HTTP-like connection. (Icecast is HTTP compliant whereas SHOUTcast is close but not quite there which can cause compatibility issues.)
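The server-side leg, taking the 16-bit PCM off the WebSocket, encoding it, and pushing it to an Icecast mount, might look roughly like this sketch (hostnames, passwords, ports and bitrate are placeholders; the 'ws' package and FFmpeg with libmp3lame are just one way to do it):

    // Receive raw 16-bit PCM over a WebSocket, encode it to MP3 with FFmpeg, and
    // push the result to an Icecast mount point as a source client.
    import { spawn } from 'child_process';
    import { WebSocketServer } from 'ws'; // third-party 'ws' package

    const ffmpeg = spawn('ffmpeg', [
      '-f', 's16le', '-ar', '44100', '-ac', '1', '-i', 'pipe:0', // raw mono PCM from stdin
      '-c:a', 'libmp3lame', '-b:a', '128k',
      '-content_type', 'audio/mpeg',
      '-f', 'mp3', 'icecast://source:hackme@localhost:8000/live' // placeholder Icecast mount
    ]);

    const wss = new WebSocketServer({ port: 9000 });
    wss.on('connection', socket => {
      socket.on('message', data => {
        ffmpeg.stdin.write(data as Buffer); // each message is a chunk of 16-bit PCM
      });
    });

Icecast then handles fanning the stream out to however many listeners connect to the mount.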
Once you have your streaming server set up, it's as simple as referencing the stream URL in your <audio> tag.
Hopefully that gets you started. Depending on your needs, you might also look into WebRTC which does all of this for you but doesn't give you options for quality and also doesn't scale beyond a few users.
After researching for a few days, I'm still lost on this issue:
I have a webcam connected over WiFi to my Android device.
I wrote an Android app that connects to a specified socket of the webcam (IP and port). From this socket I get an InputStream which is already encoded in H.264. I then redirect this InputStream from the Android device to my server, where I managed to decode it into images/frames using Xuggler.
I would like to stream my webcam live to the internet, to a Flash player or something similar.
I know I have to use Wowza, FMS, or Red5 for this.
My problem is that I don't understand how to proceed with the InputStream I have. All the examples I've read need an MP4/FLV or other container file to stream from... but I have a continuous live InputStream.
Other examples suggest using the Flash Encoder, but my InputStream is already encoded in H.264.
This is a general understanding question. Please advise me on how to solve this.
Thank you
You have the following options:
Encode into an FLV container. Yes, you can transmit a live stream using an FLV container; you can set the 'duration' field in the header to be arbitrarily long, e.g. YouTube uses this trick for live streaming.
You can wrap the stream in RTMP. ffmpeg has RTMP code which can be studied to understand the protocol, or I believe there are other open-source RTMP muxers available (see the sketch after this list).
Convert the stream into HLS; there are Flash-based HLS players available.
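For the RTMP option, a rough sketch of what handling a continuous live InputStream can look like: accept the raw H.264 bytes on a socket and let FFmpeg wrap them in FLV and publish over RTMP (ports and the RTMP URL are placeholders; the RTMP endpoint would be your Wowza/FMS/Red5 application):

    // Accept the continuous H.264 stream on a TCP socket (as relayed from the Android
    // device) and hand it to FFmpeg, which remuxes it into FLV and publishes it over RTMP.
    import * as net from 'net';
    import { spawn } from 'child_process';

    net.createServer(socket => {
      const ffmpeg = spawn('ffmpeg', [
        '-f', 'h264', '-i', 'pipe:0',               // raw Annex-B H.264 from stdin
        '-c:v', 'copy',                              // no re-encoding, just remuxing
        '-f', 'flv', 'rtmp://localhost/live/webcam'  // placeholder RTMP application/stream
      ]);
      socket.pipe(ffmpeg.stdin);                     // device stream -> muxer stdin
      socket.on('close', () => ffmpeg.stdin.end());
    }).listen(5000);                                 // port the relaying app pushes to

No container file is needed on disk; the muxer consumes the stream as it arrives.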
Why Flash, if I may ask? I hope you know that the HTML5 video tag now directly accepts H.264-encoded video.
Say I have a media file like .mkv or .mp3 and I am playing it on my computer via streaming. Who does the demuxing when the data comes from the server to the client?
Does a demuxer program on the server side open the file and extract frames, and then the streaming application packetizes those demuxed frames and transmits them to the client?
or
Does the streaming application on the server side just read the media file in chunks of some bytes and transmit them to the client, and then a demuxer program on the client side parses them, finds the real frames, and plays them?
Definitely the latter. I'm not sure why the server would do any sort of work to understand the data it sends. It would just burn CPU cycles on the server, and I can't see any benefit in doing that.
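To make that concrete, a byte-serving server can be as simple as this sketch (file name, MIME type and port are placeholders); all parsing/demuxing happens in the client's player:

    // Minimal "dumb" streaming server: read the media file in fixed-size chunks and send
    // the bytes as-is; it never looks inside the container.
    import * as http from 'http';
    import * as fs from 'fs';

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'video/x-matroska' });    // e.g. for an .mkv file
      fs.createReadStream('movie.mkv', { highWaterMark: 64 * 1024 }) // 64 KiB chunks
        .pipe(res); // transmit raw bytes; the client's demuxer finds the real frames
    }).listen(8080);

Real servers usually also honor HTTP Range requests so the client can seek, but the demuxing still happens on the client side.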
I have a video that needs to be delivered through streaming, but all viewers need to be synchronized at the same time regardless of when they started the video. If the video starts streaming at 7:00 and someone visits the page at 7:05, they should see the footage at 7:05 and onwards.
Does Red5, Flash Media Server, or any other streaming server have a feature to handle this, or is this something that needs to be handled by the player?
Regardless of how you load an active stream in Flash, it will start at the beginning of the file stream. For real-time streams, that is the moment the user joins, since the file stream starts at that moment.