Video streaming: how does it work? - networking

As I'm starting to work with video streaming, I've got a question:
Video streaming is the process of breaking a video file into small data packets that are sent over the network. But where are those packets stored, and what happens to them after streaming has finished? I am asking because, unlike a download, streaming does not keep the file locally, or at least that is how it is described on the internet. What is the process of handling stream buffers under the hood? Can someone point me in the right direction?
Any help appreciated
Thanks

Most video streams are actually HTTP request and response based - i.e. the client (player) requests the video chunk by chunk and then plays it as it receives each chunk.
To answer your question about what happens to the chunks once they are downloaded: this depends on the player and the device. In general the chunks are rebuilt into whatever video container is being used, e.g. mp4, and then played.
How long they are stored depends on the device and on the player's caching rules and capacity.
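To make the chunk-by-chunk idea concrete, here is a minimal sketch of how a player-like client might pull a video in pieces using HTTP Range requests and hand each piece to a decoder rather than keeping the file. The URL and chunk size are made up, and it assumes the server honours Range headers; it is an illustration, not any particular player's implementation.

```python
# Hedged sketch: fetch a video chunk by chunk with HTTP Range requests.
# VIDEO_URL and CHUNK_SIZE are placeholders; a real player feeds each chunk
# into a decode buffer and discards it once played.
import requests

VIDEO_URL = "https://example.com/video.mp4"   # hypothetical URL
CHUNK_SIZE = 1024 * 1024                      # request 1 MiB at a time

def fetch_chunks(url, chunk_size=CHUNK_SIZE):
    offset = 0
    while True:
        headers = {"Range": f"bytes={offset}-{offset + chunk_size - 1}"}
        resp = requests.get(url, headers=headers)
        if resp.status_code not in (200, 206) or not resp.content:
            break                             # past the end, or Range not honoured
        yield resp.content                    # hand this chunk to the decoder
        offset += len(resp.content)
        if resp.status_code == 200:           # server ignored Range: whole file returned
            break

for chunk in fetch_chunks(VIDEO_URL):
    pass  # a real player would decode and then drop the chunk here
```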

Related

Does multipart/form-data send the whole file data in one go or in a stream?

I have a requirement of uploading a large file over HTTP to a remote server.
I am researching on how to send the data using multipart/form-data.
I have gone through How does HTTP file upload work? and understood how it separates the file data using boundaries.
I wanted to know whether all the file data is sent in one go or is streamed in several pieces to the remote server.
I ask because if it is sent in one go, it may not be possible to read the whole body at the remote server at once and write it to a file.
But if it is streamed, how does the remote server parse the streamed data, write it to a file, and repeat until all the data has arrived?
Sorry if this is a noob question; I am still researching it myself.
Maybe it is outside the scope of multipart/form-data and HTTP itself takes care of it.
Any help is appreciated.
The logistics of the sending are not relevant. What matters is the maximum request size that is set on the server side. How it is set depends on the technology used there: IIS, Apache, nginx? If the browser's POST request exceeds that size (because of a too-large file), errors will happen. There is nothing on the browser side you can tweak or change to fix breaking uploads. Unless you are building your own browser :-)
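For what it's worth, the multipart body is one HTTP request whose bytes arrive at the server as a stream, so the server can consume it incrementally. Below is a minimal sketch of that idea with Python's standard library: it does not parse the multipart boundaries, it only spools the raw request body to disk in fixed-size pieces. The file name, port, and chunk size are placeholders.

```python
# Hedged sketch: read an upload body incrementally instead of buffering it all.
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        remaining = int(self.headers.get("Content-Length", 0))
        with open("upload.raw", "wb") as out:      # placeholder output file
            while remaining > 0:
                chunk = self.rfile.read(min(64 * 1024, remaining))  # 64 KiB pieces
                if not chunk:
                    break
                out.write(chunk)
                remaining -= len(chunk)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"received\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), UploadHandler).serve_forever()
```

A real handler would additionally split the body on the multipart boundary to recover the file contents; the point here is only that the data can be read and written piece by piece.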

How much data to send via TCP/IP at once

I've written a small program with the boost asio library to transfer files via TCP from a server to one or more clients.
During testing I found out that the transfer is extremely slow, about 10KiB/s. Nagle's algorithm is already disabled. If I transfer the same file via FileZilla from the same server to the same client, I get about 280KiB/s, so obviously something was very wrong.
My approach so far was to fragment each file into smaller packets of 1024 bytes, send one fragment (each fragment = one async_write call) to the client, and wait for the client's response. I need to fragment the data to allow the client to keep track of the download progress and speed. In retrospect I suppose this was rather naïve, because the server has to wait for the client's response after each fragment. To check whether this was the bottleneck, I increased the fragment size twice, giving me the following results:
a) Fragment size: 1024 bytes
Transfer speed: ~10 KiB/s
b) Fragment size: 8192 bytes
Transfer speed: ~80 KiB/s
c) Fragment size: 20000 bytes
Transfer speed: ~195 KiB/s
The results speak for themselves, but I'm unsure what to do now.
I'm not too familiar with how the data transfer is actually handled internally, but if I'm not mistaken all of my data is basically added onto a stream? If that's the case, do I need to worry about how much data I write to that stream at once? Does it make a difference at all whether I use multiple write-calls with small fragments as opposed to one write-call with a large fragment? Are there any guidelines for this?
Simply stream the data to the client without artificial packetization. Re-enable Nagle's algorithm; this is not a scenario that calls for disabling it, and keeping it disabled will cause small inefficiencies.
Typical write buffer sizes would be 4KB and above.
The client can issue read calls to the network one after the other. After each successful read the client has a new, quite accurate estimate of the current progress. Typically there will be one successful read call for each network packet received. If the incoming rate is very high, multiple packets tend to be coalesced into one read. That's not a problem.
If that's the case, do I need to worry about how much data I write to that stream at once?
No. Just keep a write call outstanding at all times.
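The same advice translated into a rough sketch, using plain Python sockets rather than Boost.Asio (the original question's library): the sender just streams the file with large writes and lets TCP segment the bytes, while the receiver derives progress from whatever each read returns. Host, port, buffer size, and file names are all placeholders.

```python
# Hedged sketch: stream a file without artificial packetization and track
# progress on the receiving side from each read.
import socket

def send_file(path, host="127.0.0.1", port=9000):
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        while True:
            buf = f.read(64 * 1024)          # large reads; no per-fragment handshake
            if not buf:
                break
            sock.sendall(buf)                # TCP decides how the bytes are segmented

def receive_file(path, total_size, port=9000):
    with socket.create_server(("0.0.0.0", port)) as srv:
        conn, _ = srv.accept()
        received = 0
        with conn, open(path, "wb") as f:
            while received < total_size:
                data = conn.recv(64 * 1024)  # each successful read updates progress
                if not data:
                    break
                f.write(data)
                received += len(data)
                print(f"progress: {received / total_size:.1%}")
```

The key design point is the absence of a per-fragment acknowledgement: the sender never waits for the receiver between writes, so throughput is governed by TCP rather than by round-trip latency.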

How do I set up a live audio streaming http server?

I was hoping to build an application that streams audio (mp3, ogg, etc.) from my microphone to a web browser.
I think I can use the html5 audio tag to read/play the stream from my server.
The area I'm really stuck on is how to set up the streaming HTTP endpoint. What technologies will I need, and how should my server be structured to get the live audio from my mic and make it accessible from my server?
For example, for streaming mp3, do I constantly respond with mp3 frames as they are recorded?
Thanks for any help!
First off, let's split this problem up into a few parts. You have the audio capture (recording), the encoding/codec, the server, and the receiving clients.
Capture -> Codec -> Server -> Several Clients
For audio capture, you will need to use the Web Audio API along with getUserMedia. This will allow you to get 32-bit floating point PCM samples from the recording device. This data stream takes up a ton of bandwidth... a few megabits per second for a stereo stream. This stream is not directly playable in an HTML5 audio tag, and while you could play it on the receiving end with the Web Audio API, it takes up too much bandwidth to be useful. You need to use a codec to get the bandwidth usage down.
The codecs you want to look at include MP3, AAC (and its variants such as HE-AAC), and Opus. Not all browsers support all codecs. MP3 is the most widely compatible, but AAC provides better quality at a given bitrate. Opus is a free and open codec but still doesn't have the greatest client adoption. In any case, there isn't yet a codec that you can run in-browser with any real stability. (Although it's being worked on! There are a lot of test projects made with Emscripten.) I solved this problem by reducing the bit depth of my samples to 16-bit signed integers and sending this PCM stream over a binary websocket to a server that does the encoding.
This encoding server took the PCM stream and ran it through a codec server-side. Here you can use whatever you'd like, such as a licensed codec binary or a tool like FFmpeg which encapsulates multiple codecs.
Next, this server streamed the data to a real streaming media server like Icecast. SHOUTcast and Icecast servers take the encoded stream and relay it to many clients over an HTTP-like connection. (Icecast is HTTP compliant whereas SHOUTcast is close but not quite there which can cause compatibility issues.)
Once you have your streaming server set up, it's as simple as referencing the stream URL in your <audio> tag.
Hopefully that gets you started. Depending on your needs, you might also look into WebRTC which does all of this for you but doesn't give you options for quality and also doesn't scale beyond a few users.
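As a companion to the question "do I constantly respond with mp3 frames as they are recorded?", here is a minimal sketch of that idea without a full Icecast setup: an HTTP endpoint that keeps the response open and forwards encoded audio bytes as an encoder process produces them. The ffmpeg command line and the input file are only illustrative; in a live setup the encoder would instead be fed the PCM arriving over the WebSocket described above.

```python
# Hedged sketch: keep an HTTP response open and push MP3 bytes as they are produced.
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Illustrative encoder pipeline; "input.wav" is a placeholder source.
ENCODER_CMD = ["ffmpeg", "-re", "-i", "input.wav", "-f", "mp3", "pipe:1"]

class Mp3StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "audio/mpeg")
        self.end_headers()
        enc = subprocess.Popen(ENCODER_CMD, stdout=subprocess.PIPE,
                               stderr=subprocess.DEVNULL)
        try:
            while True:
                frame = enc.stdout.read(4096)   # forward encoded bytes as they arrive
                if not frame:
                    break
                self.wfile.write(frame)
        except BrokenPipeError:
            pass                                 # client disconnected
        finally:
            enc.kill()

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), Mp3StreamHandler).serve_forever()
```

Note that this spawns one encoder per client, which is exactly the scaling problem Icecast/SHOUTcast solve by encoding once and relaying the same stream to many listeners.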

Characterize network camera stream

I am trying to play a network camera stream in an application, but first I need to identify how to access the stream. Unfortunately, the manufacturer seems to prefer that I access the camera's local web page and use its built-in viewer, so there's no documentation on how to access the raw stream with another application.
For starters, I opened up the camera's web viewer and captured the connection traffic, trying to identify an address to point a player at. Here's the traffic that caught my attention: (IP edited)
GET http://127.0.0.1/mpeg4 HTTP/1.1\r\n
So I point Chrome (with VLC) at 127.0.0.1/mpeg4 and I get the VLC plugin. The tab is "busy" downloading but it never stops or plays the stream. My thought is that the plugin thinks the stream is a file, so it waits for end of file to play, which never comes.
Then I switched to VLC standalone and pointed it at the same address with the same results. No errors, but there's no buffering indicator or progress.
IE pointed to the address wants to download a file called mpeg4.mpeg from 127.0.0.1, but again it keeps downloading infinitely.
So with that backstory, my question is: How can I detect exactly what this stream is and how to play it with VLC?
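One low-tech way to characterize it before handing it to VLC is to open the URL as a plain streaming HTTP request and look at the response headers and the first bytes. The sketch below just prints that information; interpreting the bytes (e.g. spotting a multipart MJPEG boundary versus a raw MPEG-4 elementary stream) is still up to you, and the URL is the one captured above.

```python
# Hedged sketch: inspect the camera endpoint's headers and first bytes.
import requests

URL = "http://127.0.0.1/mpeg4"   # address captured from the camera's web viewer

resp = requests.get(URL, stream=True, timeout=10)
print("Status:      ", resp.status_code)
print("Content-Type:", resp.headers.get("Content-Type"))
print("Transfer:    ", resp.headers.get("Transfer-Encoding"))

first = next(resp.iter_content(chunk_size=64), b"")
print("First bytes: ", first[:32].hex())
resp.close()
```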

HTTP streaming

Is HTTP streaming possible without using any streaming servers?
Of course. You can output and flush; the data reaches the client before you end the script, so that's streaming.
For live streaming, only segmented approaches work this way, like Apple HLS; other variants of segmented streaming (such as OSMF) are not widely supported at the moment.
Microsoft IIS can also do Smooth Streaming (and Apple HLS as well).
Apple HLS can be served from any web server if you pre-segment the stream into chunks and just upload them to a path on the web server.
For VoD streaming, there are lots of modules for all web servers.
Yes, although libraries have varying levels of support. What needs to be used is HTTP chunked transfer encoding ("http chunking"), so that the library does not try to buffer the whole request/response in memory (to compute the Content-Length header) and instead indicates that the content comes in chunks.
Yes, not only is it possible, it has been implemented by various media server companies; the main reason they still use dedicated servers is commercial. Basically, the content you want to stream is divided into chunks/packets, and the client machine can then request those chunks via simple HTTP GET requests.
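To illustrate the "output and flush" / chunked transfer encoding approach mentioned in the answers above, here is a minimal sketch of a plain web server process streaming a file over HTTP with no dedicated streaming server involved. The file name, port, and chunk size are placeholders; the handler writes the chunked framing by hand.

```python
# Hedged sketch: HTTP streaming via chunked transfer encoding from a plain server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChunkedStreamHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"             # chunked encoding requires HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        with open("movie.mp4", "rb") as f:     # placeholder source file
            while True:
                piece = f.read(64 * 1024)
                if not piece:
                    break
                # each chunk is: hex length, CRLF, data, CRLF
                self.wfile.write(f"{len(piece):X}\r\n".encode() + piece + b"\r\n")
                self.wfile.flush()             # push it to the client immediately
        self.wfile.write(b"0\r\n\r\n")         # zero-length chunk ends the stream

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ChunkedStreamHandler).serve_forever()
```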
Well if you have WebSockets available you can actually get quite low-latency streaming for low-fps scenarios by sending video frames as jpegs.
You can also send audio separately and play it using WebAudio on your browser. I imagine it could work for scenarios where you do not require perfect audio-video sync.
Another approach is to stream MPEG chunks through WebSockets, decode them in JS using jsmpeg and render to a canvas. You can find more here (video-only):
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets
Yes, the answer to your problem with HTTP streaming is MPEG-DASH.
