flash/flex: progressive download vs. RTMP

I'm trying to understand, and really pinpoint, when to use progressive download vs. RTMP in Flex/Flash. It seems that the main point is that RTMP is not served over HTTP, whereas progressive download is. Since it's not HTTP, the resource is protected, since there is no way to connect to the RTMP server from outside the SWF.
Even if the user can see the object code and figure out the location
<object data="http://media.example.com/jw-player/player.swf" >
<param value="streamer=rtmp://sub.example.com/video
&file=1330/title/folder2/theflvresource.flv
&id=FlvPlayer" name="flashvars">
</object>
they would not be able to connect to the RTMP stream. So RTMP seems to be more useful when you want to protect a resource? Is that all there is to it?

I agree with xtat, but want to add much more.
The pros and cons of RTMP (or any dedicated streaming protocol; note that RTMP itself runs over TCP, while RTMFP and RTP-style protocols use UDP) vs. 'progressive download' (which is really just a subset of HTTP-based streaming), in my not-so-humble opinion:
Dedicated streaming protocols
Pros
Currently significantly more difficult to pilfer streams
Currently supports live, which HTTP-based does not
Multicast-capable (in the UDP-based variants), which can be desirable on intranets
Cons
Dramatically higher resource usage relative to the HTTP-based approach
Need for specialized servers (FMS, Red5, Wowza, whatever)
More noticeable buffering
Firewall issues, especially with corporate customers
HTTP-based streaming
Pros
Dead simple
Can seek into media: FLV and MP4 (with some server-side effort)
Cons
Trivial to pilfer streams (e.g., RealDownloader)
Live streams not currently possible, but give it a year; Apple is making this a reality with HTTP Live Streaming
No multicasting
The entire HTTP-based approach is filled with and/but/if situations, lots of misunderstandings about what is and is not possible, and a lack of common definitions.
There are two basic characteristics people are looking at when discussing HTTP-based streaming: seeking and regulated bandwidth. From that, we get all these terms like 'pseudo-streaming', 'progressive download', etc.
These are the definitions I use to describe HTTP-based streaming servers:
regulated bit-rate: the flat media file is parsed by the server, and it sends media only as fast as the player needs in order to play the media without buffering.
seeking: the ability of a web server to seek into the media and effectively create a new 'file' on the fly for use by the client. Similar to an HTTP byte-range request, except that headers and media metadata are added/modified.
progressive download: just send the file, as fast as possible. Basically, put the media file on a web server that sends it to the client in a 'dumb' manner, like a large .iso or .zip file.
pseudo-streaming: the ability of a web server to send media files to the client with a regulated bit-rate and to seek into files.
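To make the 'seeking' half concrete: over plain HTTP, a player seeks by issuing a byte-range request, and the server answers 206 Partial Content with a Content-Range header. A sketch with curl (the URL is a stand-in):
curl -s -D - -o /dev/null -H "Range: bytes=1000000-1999999" http://media.example.com/video.mp4
A pseudo-streaming server goes further: it also prepends the headers and metadata needed to make the returned bytes independently playable.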

Personally, the main reason to choose RTMP over progressive download is that it allows your users to skip to the middle of a video without having to download the whole file.

These days, unless you need to record, there's not really any point in using RTMP. HTTP is simpler and obviously much more widely supported, easier to debug, and it does indeed allow for seeking, even over CDNs. This is what I have set up at Viddler.


HTTP Video Streaming

I have a server (not internet connected) that hosts a webpage with company data on an internal website. The server also contains videos (thousands of them) in a defined directory structure.
When a client connects, I can display the videos to them on the internal website. The problem is some of the video files are 1 GB or larger and the connection to some clients is rather slow; the browser seems to be trying to download them in order to play them rather than stream them.
Is there a video streaming server that I could send a file path to and it would serve the video back to the client as a stream?
I guess it's essentially transcoding of the video that I need done. I'm not sure if Plex or something like that is able to do it dynamically, as there are hundreds of videos and new videos added all the time.
Sorry if I'm not being clear about my need. Ask me a question if I haven't been clear on a point.
...the browser seems to be trying to download them in order to play them rather than stream them.
To echo what @Offbeatmammal said in the comments, if you're using MP4 files, you need to ensure the MOOV atom is at the beginning of the file. Without it, the browser doesn't know what byte offsets to request.
Ideally, encode your video files as fragmented. In FFmpeg:
ffmpeg -i ... -f mp4 -movflags frag_keyframe+empty_moov output.mp4
See also: https://stackoverflow.com/a/9734251/362536
That should allow the client to stream the MP4 files from any web server that supports HTTP/1.1 range requests. (Almost all do, unless configured otherwise.)
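If you'd rather keep ordinary, non-fragmented MP4s, relocating the MOOV atom to the front with FFmpeg's faststart flag also works (file names here are placeholders):
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
The -c copy avoids re-encoding; FFmpeg just rewrites the container.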
However, there is another point to address:
The problem is some of the video files are 1 GB or larger and the connection to some clients is rather slow...
While fixing the streaming issue means the clients won't have to download the whole file first, they still need the bandwidth to keep up with the stream. If it's possible they won't, you'll want to implement some sort of transcoder.
I would recommend using an existing segmented streaming method such as DASH or HLS. HLS is currently the most compatible, thanks to Apple's platform policies. Either will enable adaptive bitrate switching, which will allow slow clients to automatically switch to a lower bitrate stream that they can smoothly keep up with. That way, slower clients can still see the video, albeit a lower quality one, while fast clients can get the full quality video.
You can use FFmpeg to do the transcoding and HLS playlist creation.
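As a rough sketch (file names, bitrates, and segment length are illustrative, not a tuned bitrate ladder), a single HLS variant can be produced like this:
ffmpeg -i input.mp4 -c:v libx264 -b:v 800k -c:a aac -b:a 128k -f hls -hls_time 6 -hls_playlist_type vod stream.m3u8
For adaptive bitrate, you would repeat this at several resolutions/bitrates and reference the variants from a master playlist.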
I'm not sure if Plex or something like that is able to do it dynamically, as there are hundreds of videos and new videos added all the time.
As for when you do this transcode, I suppose it depends on how much load you're looking at. If this is just one or two people viewing the file, you can transcode on demand if your servers can keep up. Ideally, you have at least a couple of stream variants around for less popular files, and add more later if needed.
If you're running this on a live site, I'd recommend doing all of your transcoding up front. You can always prune old files/variants if you need the storage back.

Can gRPC be used for audio-video streaming over Internet?

I understand that in a client-server model gRPC can do bidirectional streaming of data.
I have not tried it yet, but I want to know: would it be possible to stream audio and video data from a source to a cloud server using gRPC and then broadcast it to multiple clients, all in real time?
TLDR: I would not recommend video over gRPC. Primarily because it wasn't designed for it, so doing it would take a lot of hacking. You should probably take a look at WebRTC + a specific video codec.
More information below:
gRPC has no video compression
When sending video, we want to send things efficiently, because sending it raw could require around 1 GB/s of connectivity (e.g., uncompressed 4K at 30 fps is 3840 x 2160 pixels x 3 bytes per pixel x 30 frames/s ≈ 750 MB/s).
So we use video compression / video encoding. For example, H.264, VP8 or AV1.
It helps to understand how video compression works: e.g., it saves bandwidth largely by not re-sending data that is similar between consecutive frames.
There is no video encoder for protobufs (the format used by gRPC).
You could instead try image compression and save the frames in a bytes field (e.g. bytes image_frame = 1;), but this is less efficient and definitely takes up unnecessary space for video.
It's probably possible to encode frames with a video encoder (e.g. H.264), carry them in protobuf bytes fields, and then decode them for playback in applications. However, it might take a lot of hacking/engineering effort. This use case is not what gRPC/protobufs are designed for, and it is not commonly done. Let me know if you hack something together; I would be curious.
gRPC is reliable
gRPC uses TCP (not UDP), which is reliable.
At a glance, reliability might seem handy, to avoid corrupted or lost data. However, depending on the use case (real-time video or audio calls), we may prefer to skip frames if they are dropped or delayed. The losses may be unnoticeable or painless to the user. With TCP:
If a packet is delayed or arrives out of order, the receiver waits for it before playing the rest (in-order delivery causes head-of-line blocking)
If a packet is dropped, the sender resends it (retransmission on packet loss)
Therefore, video conferencing apps usually use WebRTC/RTP (configured to be unreliable)
That said, it looks like Zoom was able to implement video-over-WebSockets, which is also a reliable transport (over TCP), so it's not 'game over', just strongly discouraged and a lot more effort. (They have since moved over to WebRTC.) From their description of the WebSocket approach:
Data received on the WebSockets goes into a WebAssembly (WASM) based decoder. Audio is fed to an AudioWorklet in browsers that support that. From there the decoded audio is played using the WebAudio “magic” destination node.

What is the difference between FTP and HTTP?

HTTP is used to display information, and it can also be used to transfer files from one host to another.
FTP is used to transfer files from one host to another.
So I've come to the conclusion that FTP and HTTP are almost doing the same work. Then what is the exact benefit of using FTP when I can do this with HTTP?
Correct me if I am wrong.
Thanks
FTP
The File Transfer Protocol is, as the name says, for transferring files. It is significantly older; it is a protocol designed to enable the transfer of files over a long-running session. There is a wide array of commands, and the intent is to allow you to navigate and browse a remote file system and retrieve files (over a separate data connection).
FTP still sees a lot of use, but many files are actually transferred over HTTP instead.
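As a quick illustration (host, path, and credentials are placeholders), curl speaks FTP, and its verbose flag shows the command/response session (USER, PASS, PASV, RETR, and so on) that the paragraph above describes:
curl -v --user name:password ftp://ftp.example.com/pub/file.txt -o file.txt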
HTTP
The HyperText Transfer Protocol was originally designed to transfer hypertext documents and the various assets needed to render them. In practice, this is the way information is transferred on the web -- HTML, CSS, images, and data are all transferred between web servers and web browsers this way, as well as between one server and another.
HTTP was designed to retrieve a resource from a URL that may or may not match the remote file system (in many web apps, the structure of the URLs has very little to do with the file locations). There is often only a single request per HTTP connection, and the data uses the same connection as the request.
So I've come to the conclusion that FTP and HTTP are almost doing the same work.
Not really. FTP can be used for file transfer and not much more. HTTP is way more flexible, since it not only transfers byte streams but also metadata (what kind of data this is), supports implicit compression and client-specific responses (e.g. based on supported languages), has more flexible ways of authentication, and is tuned for less overhead (i.e. it can be faster)...
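Some of this flexibility is visible from a single curl call (the URL is a stand-in); --compressed transparently negotiates gzip, and the verbose output shows metadata headers such as Content-Type and Content-Encoding, which FTP has no equivalent of:
curl -v --compressed -H "Accept-Language: de" http://example.com/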
Then what is the exact benefit of using FTP when I can do this with HTTP?
There is no real benefit to FTP today. On the contrary: compared to alternatives like HTTP, the design of FTP leads to lots of problems in today's infrastructure, where NAT is heavily used (i.e. multiple internal systems behind a single router with a public IP address).
FTP remains mostly in places where clients or servers don't support more modern ways of file exchange. A typical example is cheap web hosting, where access to the server to update files is often done by FTP, since lots of tools have FTP built in and it is easy to set up on the server too. Alternatives like WebDAV (HTTP-based) or SFTP (SSH-based) are less used here since they have less support in clients and servers, even though they would offer more security, more flexibility, and fewer problems.

On-the-fly video streaming over http?

I'm building an application that will serve up video files to users on a variety of different platforms. As such, I need the ability to set up a server that will serve up video files that might need to be transcoded into a number of different formats. Basically, I want to replicate the functionality that TVersity provides.
The ideal solution would allow me to access the video stream via HTTP, specifying some sort of transcoding parameters in the call.
Anyone have any good ideas?
Thanks!
Chris
HTTP is not a streaming protocol. Have a look at progressive download - there are lots of PHP implementations / flash players available. ffmpeg is a good tool for converting formats / size / frame rates etc.
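For the conversion half, a minimal FFmpeg invocation covering format, size, and frame rate looks like this (file names and values are placeholders you would wire up to your HTTP parameters):
ffmpeg -i input.avi -c:v libx264 -crf 23 -vf scale=1280:-2 -r 30 -c:a aac output.mp4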

TCP vs. HTTP Benchmark

I have a web application sitting on IIS, talking with a remote service machine.
I am not sure whether to choose TCP or HTTP as the main protocol.
More details:
I will have more than one service/endpoint
some of them will be one-way
others will be two-way
the web pages will work in front of the services
we are talking about a high-scale website
I know the difference pretty well, but I am looking for a good benchmark that shows how much faster TCP is.
HTTP is a layer built on top of the TCP layer to somewhat standardize data transmission. So naturally, using TCP sockets will be less heavy than using HTTP. If performance is the only thing you care about, then plain TCP is the best solution for you.
You may want to consider HTTP because of its ease of use and simplicity, which ultimately reduces development time. If you are doing something that might be directly consumed by a browser (through an AJAX call), then you should use HTTP. For a non-modern browser to directly consume TCP connections without HTTP, you would have to use Flash or Silverlight, and this normally happens for rich content such as video and/or audio. However, many modern browsers now (as of 2013) support APIs to access network, audio, and video resources directly via JavaScript. The only thing to consider is the usage rate of modern web browsers among your users; see caniuse.com for the latest info regarding browser compatibility.
As for benchmarks, this is the only thing I found. See page 5, it has the performance graph. Note that it doesn't really compare apples to apples since it compares the TCP/Binary data option with the HTTP/XML data option. Which begs the question: what kind of data are your services outputting? binary (video, audio, files) or text (JSON, XML, HTML)?
In general, performance-oriented systems like those in the military or financial sectors will probably use plain TCP connections, whereas general web-focused companies will opt to use HTTP and host their services with IIS or Apache.
The question you really need an answer for is "will TCP or HTTP be faster for my application". The answer is that it depends on the nature of your application, and on the way that you use TCP and/or HTTP in your application. A generic HTTP vs TCP benchmark won't answer your question, because the chances are that the benchmark won't match your application behaviour.
In theory, an optimally designed / implemented solution using TCP will be faster than one that uses HTTP. But it may also be considerably more work to implement ... depending on the details of your application.
There are other issues that might affect your choice. For example, you are less likely to run into firewall issues if you use HTTP than if you use TCP on some random port. Another is that HTTP would make it easier to implement a load balancer between the IIS server and the backend systems.
Finally, at the end of the day it is probably more important that your system is secure, reliable, maintainable and (maybe) scalable than it is fast. A sensible strategy is to implement the simple version first, but have plans in your head for how to make it faster ... if the simple solution is too slow.
You could always benchmark it.
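If you do, measure your own endpoints rather than trusting a generic comparison. For the HTTP side, curl's timing variables give a quick first look (the URL is a placeholder):
curl -s -o /dev/null -w "connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" http://service.example.com/endpoint
For the raw-TCP side you would need an equivalent client of your own, which is exactly where the extra development cost shows up.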
In general, if what you want to accomplish can be easily done over HTTP (i.e. the only reason you would otherwise think about using raw TCP is a possible performance boost), you should probably just use HTTP. Sure, you can do socket programming, but why bother? Lots of people have spent a lot of time and effort building HTTP client libraries and servers, and they have spent waaaaaay more time optimizing and testing that code than you will ever be able to spend on your TCP sockets. There are simply so many possible errors to handle, edge cases to cover, and optimizations to make that it is usually easier and safer to use a library for HTTP.
Plus, the HTTP specs define all kinds of features (which clients and servers implement, and which you get to use 'for free', i.e. with no extra implementation work) that make any third-party interoperability that much easier: "Here is my URL, here are the rules for what you send, here are the rules for what I return..."
I have a self-hosted Windows native C++ server application that uses the Casablanca C++ REST SDK. I can use any client -- C#, JavaScript, C++, cURL, basically anything that can send POST, GET, PUT, or DEL messages -- to send requests to this self-hosted Windows app. I can also use a plain browser address bar to do GET-related requests with various parameters. Currently I only run this system on a private intranet, so it is very fast; I haven't benchmarked it against raw TCP, but on a private intranet I doubt there would be more than a few microseconds' difference. For the convenience, ease of development, and the ability to expand into a full-blown internet app, it's a dream come true. It is a dedicated system with a private protocol using small JSON packets, so I'm not certain whether that fits your application's needs. Another nice thing is that this native C++ code could be ported fairly easily to Linux/macOS, as the Casablanca REST SDK is portable to those OSes.
