I have a video that needs to be delivered through streaming, but all viewers need to be synchronized, regardless of when they start watching. If the video starts streaming at 7:00 and someone visits the page at 7:05, they should see the footage from 7:05 onwards.
Does Red5, Flash Media Server, or any other streaming server have a feature to handle this, or is this something that needs to be handled by the player?
Regardless of how you load an active stream in Flash, playback starts at the beginning of the file stream. For real-time (live) streams, that beginning is the moment the user joins, since the file stream only starts at that moment, so viewers of a true live stream are inherently in sync.
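If you instead serve a prerecorded file and want the same effect, one player-side approach (not part of the answer above, so treat it as a sketch) is to seek to the wall-clock offset when the page loads, assuming the player knows the broadcast start time:

```typescript
// Minimal sync sketch: seek a prerecorded video to its "wall-clock" position.
// STREAM_START is an assumption; in practice the server would supply it.
const STREAM_START = Date.parse("2024-06-01T19:00:00Z");
const video = document.querySelector<HTMLVideoElement>("video")!;

video.addEventListener("loadedmetadata", () => {
  const offsetSeconds = (Date.now() - STREAM_START) / 1000;
  // Clamp to the playable range, then start playback at the synchronized point.
  video.currentTime = Math.min(Math.max(offsetSeconds, 0), video.duration);
  video.play();
});
```

Clock skew between clients will limit how tight the synchronization can be; a shared time source helps if frame-level accuracy matters.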
As I'm starting to work with video streaming, I've got a question:
Video streaming is the process of breaking a video file into small data packets that are sent over the network. But where are they stored, and what happens to them after streaming has finished? I am asking because, unlike a download, streaming does not keep the file locally (at least, that's how it is usually described online). What is the process for handling stream buffers under the hood? Can someone point me in the right direction?
Any help appreciated
Thanks
Most video streams are actually HTTP request and response based, i.e. the client (player) requests the video chunk by chunk and plays each chunk as it is received.
To answer your question about what happens to the chunks once they are downloaded: this depends on the player and the device. In general the chunks are rebuilt into the particular video container being used, e.g. MP4, and then played.
How long they are stored depends on the device and on the player's caching rules and capacity.
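To make the chunk-by-chunk flow concrete, here is a minimal browser-side sketch using the Media Source Extensions API; the segment URLs and codec string are illustrative assumptions:

```typescript
// Minimal sketch: fetch video chunks over HTTP and append them to a SourceBuffer.
// The browser buffers the appended chunks and plays them back as one video.
const video = document.querySelector<HTMLVideoElement>("video")!;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  for (let i = 0; i < 5; i++) {
    const resp = await fetch(`/segments/chunk${i}.m4s`); // hypothetical segment endpoint
    sb.appendBuffer(await resp.arrayBuffer());
    // Wait for the buffer to finish ingesting before appending the next chunk.
    await new Promise((resolve) => sb.addEventListener("updateend", resolve, { once: true }));
  }
  mediaSource.endOfStream();
});
```

Buffered data is held in memory by the SourceBuffer; players typically evict older ranges (e.g. via SourceBuffer.remove()) once capacity limits are hit.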
I am trying to build a video streaming service using nginx-rtmp to read the stream and serve it as DASH/HLS media.
I used this link for the live streaming setup.
I used OBS to send the stream to nginx.
But it is a live stream, and users can't seek within it.
So, is there a way for users to actually seek it?
Or is there a better way to stream video using DASH/HLS?
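One option (a sketch, not a drop-in config) is to keep a longer HLS playlist window so players can seek back within the retained segments; nginx-rtmp's hls_playlist_length directive controls how much of the live stream stays available:

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;
            hls_path /tmp/hls;        # where segments and the playlist are written
            hls_fragment 4s;          # duration of each segment
            hls_playlist_length 300s; # keep ~5 minutes of segments for DVR-style seeking
        }
    }
}
```

Players that support a sliding window will then allow seeking within the retained range; for unlimited seeking you would have to record the stream and serve it as VOD afterwards.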
I was hoping to build an application that streams audio (mp3, ogg, etc.) from my microphone to a web browser.
I think I can use the HTML5 audio tag to read/play the stream from my server.
The area I'm really stuck on is how to setup the streaming http endpoint. What technologies will I need, and how should my server be structured to get the live audio from my mic and accessible from my server?
For example, for streaming mp3, do I constantly respond with mp3 frames as they are recorded?
Thanks for any help!
First off, let's split this problem up into a few parts. You have the audio capture (recording), the encoding/codec, the server, and the receiving clients.
Capture -> Codec -> Server -> Several Clients
For audio capture, you will need to use the Web Audio API along with getUserMedia. This will allow you to get 32-bit floating point PCM samples from the recording device. This data stream takes up a ton of bandwidth... a few megabits per second for a stereo stream. This stream is not directly playable in an HTML5 audio tag, and while you could play it on the receiving end with the Web Audio API, it takes up too much bandwidth to be useful. You need to use a codec to get the bandwidth usage down.
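As a rough illustration of the capture step (a sketch only; it uses the deprecated ScriptProcessorNode for brevity, with AudioWorklet being the modern replacement):

```typescript
// Minimal capture sketch: read 32-bit float PCM blocks from the microphone.
async function startCapture(onSamples: (pcm: Float32Array) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated in favor of AudioWorklet, but keeps the sketch short.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => onSamples(e.inputBuffer.getChannelData(0));
  source.connect(processor);
  processor.connect(ctx.destination); // some browsers only run the node when connected
}
```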
The codecs you want to look at include MP3, AAC (and its variants such as HE-AAC), and Opus. Not all browsers support all codecs. MP3 is the most widely compatible, but AAC provides better quality at a given bitrate. Opus is a free and open codec but still doesn't have the greatest client adoption. In any case, there isn't yet a codec that you can run in-browser with any real stability. (Although it's being worked on! There are a lot of test projects made with Emscripten.) I solved this problem by reducing the bit depth of my samples to 16-bit signed integers and sending this PCM stream to a server for encoding, over a binary websocket.
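That downconversion might look like this (the WebSocket endpoint is a placeholder):

```typescript
// Sketch: convert 32-bit float samples (range [-1, 1]) to signed 16-bit PCM,
// halving the bandwidth, and ship them to the encoding server over a binary WebSocket.
const ws = new WebSocket("wss://example.com/pcm-ingest"); // hypothetical ingest endpoint
ws.binaryType = "arraybuffer";

function floatTo16BitPCM(input: Float32Array): Int16Array {
  const out = new Int16Array(input.length);
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i])); // clamp before scaling
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Feed this from the capture callback with each block of samples.
function sendSamples(pcm: Float32Array): void {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(floatTo16BitPCM(pcm).buffer);
  }
}
```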
This encoding server took the PCM stream and ran it through a codec server-side. Here you can use whatever you'd like, such as a licensed codec binary or a tool like FFmpeg which encapsulates multiple codecs.
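A server-side sketch of that step, assuming Node.js with the third-party ws package and an FFmpeg binary on the PATH (the sample rate, channel count, and bitrate are assumptions):

```typescript
// Sketch: receive raw 16-bit PCM over a WebSocket and pipe it through FFmpeg to MP3.
import { spawn } from "child_process";
import { WebSocketServer } from "ws";

const ffmpeg = spawn("ffmpeg", [
  "-f", "s16le",  // raw signed 16-bit little-endian PCM in...
  "-ar", "44100", // assumed sample rate
  "-ac", "1",     // assumed mono
  "-i", "pipe:0",
  "-f", "mp3",    // ...MP3 frames out
  "-b:a", "128k",
  "pipe:1",
]);

new WebSocketServer({ port: 8080 }).on("connection", (socket) => {
  socket.on("message", (pcm) => ffmpeg.stdin.write(pcm as Buffer));
});

// ffmpeg.stdout now carries a continuous MP3 stream, ready to relay to a
// streaming server such as Icecast.
```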
Next, this server streamed the data to a real streaming media server like Icecast. SHOUTcast and Icecast servers take the encoded stream and relay it to many clients over an HTTP-like connection. (Icecast is HTTP compliant whereas SHOUTcast is close but not quite there which can cause compatibility issues.)
Once you have your streaming server set up, it's as simple as referencing the stream URL in your <audio> tag.
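For example (the mount point URL is a placeholder):

```typescript
// Playback side: point an audio element at the (hypothetical) Icecast mount point.
const player = new Audio("http://streaming.example.com:8000/live.mp3");
player.play();
```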
Hopefully that gets you started. Depending on your needs, you might also look into WebRTC which does all of this for you but doesn't give you options for quality and also doesn't scale beyond a few users.
I have some output from a program I'd like to stream live to an HTML5 video tag. So far I've used VLC to capture the screen, transcode it to Ogg, and stream it using its built-in HTTP server. It works insofar as I see the desktop image in the browser window.
The catch is this: every time I refresh the page, the video starts from the top, whereas I'd like to see only the current screen, so that I can use it to build a sort of limited remote desktop solution that lets me control the Ubuntu desktop program from the browser.
I was thinking of using websockets to send the mouse events to the program, but I'm stuck on how to get the live picture instead of the whole stream.
Thanks in advance!
If you are building the server side as well, I would suggest handling that operation yourself.
What you can do is use MJPEG for HTML streaming: write a server application that accepts HTTP connections, sends the MJPEG stream header once, and then sends a new picture for every update. That way you get a real-time stream in the browser.
This option is good because it gives you control over the stream from the server side, while on the client side it is just a tag pointing at the MJPEG stream.
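A minimal sketch of such a server in Node.js/TypeScript (getLatestFrameJpeg() is a hypothetical capture helper); the browser then shows the stream with a plain <img> tag pointing at the server:

```typescript
// Minimal MJPEG server sketch: send the multipart header once, then a fresh
// JPEG part for every frame; each part replaces the previous image in the browser.
import * as http from "http";

declare function getLatestFrameJpeg(): Buffer; // hypothetical: current screen as JPEG

const BOUNDARY = "frame";

http.createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": `multipart/x-mixed-replace; boundary=${BOUNDARY}`,
    "Cache-Control": "no-cache",
  });
  const timer = setInterval(() => {
    const jpeg = getLatestFrameJpeg();
    res.write(`--${BOUNDARY}\r\nContent-Type: image/jpeg\r\nContent-Length: ${jpeg.length}\r\n\r\n`);
    res.write(jpeg);
    res.write("\r\n");
  }, 100); // ~10 frames per second
  req.on("close", () => clearInterval(timer));
}).listen(8081);
```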
Regarding WebSockets: yes, you can build that, but you will have to implement input-device control on the remote computer's side.
Here is an MJPEG streaming server that might be interesting to you: http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server
When using RTMP, if the request is tunneled through HTTP (RTMPT), how different is it from a plain HTTP request?
What would be the performance implications of tunneling while using RTMP?
The advantage of RTMP streams over casual HTTP-based progressive downloading is too significant to ignore:
You can serve Flash Video over the Internet using RTMP, a special protocol for real-time server applications ranging from instant messaging to collaborative data sharing to video streaming. Whereas HTTP-delivered Flash Video is referred to as progressive download video, RTMP-delivered Flash Video is called streaming video. However, because the term streaming is so often misused, I prefer the term real-time streaming video.
One of the benefits of RTMP delivery for the viewer is near-instantaneous playback of video, provided the Flash Video file is encoded with a bitrate appropriate to the viewer's connection speed. Real-time streaming video can also be seeked to any point in the content. This feature is particularly advantageous for long-duration content because the viewer doesn't have to wait for the video file to load before jumping ahead, as is the case for HTTP-delivered video.
http://www.cisco.com/en/US/prod/collateral/video/ps11488/ps11791/ps11802/white_paper_c11-675935.html