HTTP Video Streaming - http

I have a server (not internet connected) that hosts a webpage with company data on an internal website. The server also contains videos (thousands of them) in a defined directory structure.
When a client connects I can display the videos to them on the internal website. The problem is that some of the video files are 1 GB or larger and the connection to some clients is rather slow; the browser seems to be trying to download them in order to play them rather than stream them.
Is there a video streaming server that I could send a file path to and it would serve the video back to the client as a stream?
I guess what I essentially need is transcoding of the video. I'm not sure if Plex or something like that is able to do it dynamically, as there are hundreds of videos and new videos are added all the time.
Sorry if I'm not being clear about my need. Ask me a question if I haven't been clear on a point.

...the browser seems to be trying to download them in order to play them rather than stream them.
To echo what @Offbeatmammal said in the comments, if you're using MP4 files, you need to ensure the moov atom is at the beginning of the file. Without it, the browser doesn't know what byte offsets to request.
Ideally, encode your video files as fragmented. In FFmpeg:
ffmpeg -i ... -f mp4 -movflags frag_keyframe+empty_moov output.mp4
See also: https://stackoverflow.com/a/9734251/362536
That should allow the client to stream the MP4 files from any web server that supports HTTP/1.1 range requests. (Almost all do, unless configured otherwise.)
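If re-encoding is impractical, a lighter-weight alternative (a minimal sketch; the file names are placeholders) is to remux the existing MP4s so the moov atom is moved to the front:
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
Because -c copy leaves the audio and video untouched, this only rewrites the container and is quick and lossless.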
However, there is another point to address:
The problem is that some of the video files are 1 GB or larger and the connection to some clients is rather slow...
While fixing the streaming issue means the clients won't have to download the whole file first, they still need the bandwidth to keep up with the stream. If it's possible they won't, you'll want to implement some sort of transcoder.
I would recommend using an existing segmented streaming method such as DASH or HLS. HLS is currently the most compatible, thanks to Apple's platform policies. Either will enable adaptive bitrate switching, which will allow slow clients to automatically switch to a lower bitrate stream that they can smoothly keep up with. That way, slower clients can still see the video, albeit a lower quality one, while fast clients can get the full quality video.
You can use FFmpeg to do the transcoding and HLS playlist creation.
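As a rough sketch (the file names, bitrate, resolution and segment length here are illustrative, not anything from the question), a single HLS rendition could be produced with something like:
ffmpeg -i input.mp4 -c:v libx264 -b:v 800k -vf scale=-2:480 -c:a aac -f hls -hls_time 6 -hls_playlist_type vod 480p.m3u8
Repeat with other bitrates/resolutions and reference the resulting playlists from a master playlist to get the adaptive bitrate switching described above.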
I'm not sure if Plex or something like that is able to do it dynamically, as there are hundreds of videos and new videos are added all the time.
As for when you do this transcoding, I suppose it depends on how much load you're looking at. If this is just one or two people viewing the file, you can transcode on demand if your servers can keep up. Ideally, you keep at least a couple of stream variants around for less popular files, and add more later if needed.
If you're doing this live, I'd recommend doing all of your transcoding up front. You can always prune old files/variants if you need the storage back.

Related

Accumulate media content from multiple requests

Firstly, I'm not sure if the title is clear enough. I just try to explain what happens to me and what I'm trying to do.
I'm trying to detect the direct link of media files (films, videos, etc.) from some websites by using 'Firebug' (Firefox), 'Inspector' (Firefox) or other tools in Chrome. I also tried with Wireshark. It's pretty easy for some websites, because I can see the direct link in the request and, by using programs like QuickTime, I can save the file to my local disk. For some other websites, I can see requests for streaming media content. However, the problem is that for the same file there are many requests. It seems that every few seconds they use a different request to load, say, 1.5 MB. When I copy one detected link into the address bar, the browser downloads a small file (media type), but the file cannot be played. Following is one example of multiple requests being used for the same video:
My questions are:
- How can they decompose the video content into multiple requests? How can they accumulate the responses? What kind of protocol is used? (From Wireshark it's a TCP stream, but I'm not sure that's correct, because I read somewhere that RTMP is common.) I watched a video on YouTube about 'rtmpdump', but it isn't applicable in this case (in the attached image).
- Is there any client tool that can help us to accumulate multiple media responses?
Thank you very much.
This is an HLS stream. Look at the start of playback for a .m3u8 file. This will list the URL for each file. Each file is a small piece of the total video, usually 2 to 10 seconds. Each segment should be playable on its own, assuming there is no DRM. It is delivered over HTTP, hence the name HTTP Live Streaming.
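For illustration only (the segment names and durations here are invented), the media playlist inside that .m3u8 file looks roughly like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:9.009,
segment0.ts
#EXTINF:9.009,
segment1.ts
#EXT-X-ENDLIST
The player fetches those .ts segments one after another with ordinary HTTP GETs, which is why you see a new request every few seconds for the same video.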

video server supporting Http

I want to set up a video-on-demand server which supports the HTTP protocol. It is like YouTube, which hosts a lot of videos, and end users can play them from the browser (using Flash or HTML5).
Two quick questions,
For the big video files, shall I put them on disk or in memory? How do YouTube or other big video sites do it? I'm not sure if putting all videos in memory is too expensive, or if putting videos on disk is too slow.
Is there any open source video hosting server for my purpose? If streaming is supported, that would be great.
thanks in advance,
George
If you just want to have an HTML page that links to your video files - no problem, but most browsers will download the entire file before your system even considers playing it.
If you want to stream the files (like YouTube and others do) then you aren't actually using HTTP for the video itself. HTTP is used to get the information about the stream so your player can stream and play directly without having to download the entire file first.
Streaming video uses RTSP (or some other streaming protocol) for the audio and video data.
The closest the HTTP protocol can get to "streaming" video is to use server push of individual image frames, with each frame flagged to replace the previous frame. Not all browsers can handle this directly; some may need an ActiveX control or Java applet. The original QuickTime did this before streaming protocols were implemented at the servers.
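For reference (the boundary string and layout here are just an illustrative sketch of that technique), such a server-push response looks like:
HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace; boundary=frame

--frame
Content-Type: image/jpeg

...JPEG data for frame 1...
--frame
Content-Type: image/jpeg

...JPEG data for frame 2...
Each new part replaces the previously displayed image, which is how motion-JPEG style "streaming" over plain HTTP works.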
re: how does YouTube deal with big video files
I suspect they are on disk until they are needed, moved into memory only as needed, and flushed from memory when no longer needed.
re: is there an open source video server for my purpose
YES! Check out http://www.videolan.org/
-Jesse
Another approach is to use HTTP Live Streaming (HLS). The web server is simply a standard httpd server; video/audio is preprocessed on the server side into a set of bitrate playlists.
The logic is on the client side: it retrieves the media as a series of roughly 6-second files, based on a bandwidth-appropriate playlist (see the example master playlist below).
So:
- use files, not memory
- there are open source HLS segmenters (e.g. ffmpeg)
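A master playlist (the URIs, bandwidths and resolutions below are only illustrative) is what ties the per-bitrate playlists together so a client can pick the one that fits its connection:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=854x480
480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p.m3u8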

incremental http live streaming

I seek to use the HTTP Live Streaming standard with video. I'd like to eliminate any delay while a user is working with our app, but the current architecture requires fully re-encoding the audio whenever video clips are added or removed.
Is there an incremental encoding approach to HTTP Live Streaming that would let me:
- keep the audio track separate, but play it back seamlessly with the video stream
- allow .ts chunks to be independently encoded and streamed back to a user faster than re-encoding an entire video
References:
https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming
https://developer.apple.com/streaming/
You could re-encode the required segments fairly easily -- there's no need to have the entire stream encoded before playing it (otherwise live events would be impossible). You have to be careful with the timestamps in the TS packets if you want it to be truly seamless. But what might be easiest is to use EXT-X-DISCONTINUITY markers around the re-created portions.
As for audio, there's no need to re-encode it. You should be able to just copy the encoded audio from one TS container to another. For example, if you're using ffmpeg, you would use -acodec copy to take it from the original ts.
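For example (the file names here are placeholders, not from the question), re-encoding the video for one segment while copying the existing audio from the original TS might look like:
ffmpeg -i new_clip.mp4 -i original.ts -map 0:v -map 1:a -c:v libx264 -c:a copy -f mpegts segment.ts
The -map options take video from the new clip and audio from the original segment, and -c:a copy avoids a generation loss on the audio.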

flash/flex: progressive download vs. rtmp

I'm trying to understand and really pinpoint when to use progressive download vs. RTMP in Flex/Flash. It seems that the main point is that RTMP is not served over HTTP, whereas progressive download is. Since it's not HTTP, the resource is protected, since there is no way to connect to the RTMP server from outside the SWF.
Even if the user can see that object code and can figure out the location
<object data="http://media.example.com/jw-player/player.swf" >
<param value="streamer=rtmp://sub.example.com/video
&file=1330/title/folder2/theflvresource.flv
&id=FlvPlayer" name="flashvars">
</object>
they would not be able to connect to the RTMP stream. So RTMP seems to be more useful when you want to protect a resource? Is that all there is to it?
I agree with xtat, but want to add much more.
The pros and cons of RTMP (or other dedicated streaming protocols) vs. 'progressive download' (which is really just a subset of HTTP-based streaming), in my not-so-humble opinion:
Dedicated streaming protocols (RTMP and the like)
Pros
Currently significantly more difficult to pilfer streams
Currently supports live, which HTTP-based does not
Multi-cast capable, which can be desirable on intranets
Cons
Dramatically higher resource usage relative to the HTTP-based approach
Need for specialized servers (FMS, Red5, Wowza, whatever)
More noticeable buffering
Firewall issues, especially with corporate customers
HTTP-based streaming
Pros
Dead simple
Can seek into media: FLV and MP4 (with some effort)
Cons
Trivial to pilfer streams. E.g.: Real Downloader
Live streams are not currently possible, but give it a year; Apple is making this a reality.
No multicasting
The entire HTTP-based approach is filled with and/but/if situations, lots of misunderstandings about what is and is not possible, and a lack of common definitions.
There are two basic characteristics people are looking at when discussing HTTP-based streaming: seeking and regulated bandwidth. From that, we get all these terms like 'pseudo-streaming', 'progressive download', etc.
These are the definitions I use to describe HTTP-based streaming servers:
regulated bit-rate: The flat media file is parsed by the server, and it sends the media as fast as the player needs in order to play it without buffering.
seeking: the ability of a web server to seek into the media and effectively create a new 'file' on the fly for use by the client. Similar to an HTTP byte-range request (see the example exchange below), except that headers and media metadata are added/modified.
progressive download: Just send the file, as fast as possible. Basically, put the media file on a web server that sends it to the client in a 'dumb' manner, like a large .iso or .zip file.
pseudo streaming: the ability of a web server to send media files to the client with a regulated bit-rate and to seek into files.
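To make the byte-range comparison concrete (the host, path and offsets below are invented for illustration), a seek in the plain HTTP case boils down to an exchange like:
GET /videos/clip.mp4 HTTP/1.1
Host: media.example.com
Range: bytes=1048576-

HTTP/1.1 206 Partial Content
Content-Range: bytes 1048576-1073741823/1073741824
A pseudo-streaming server goes a step further: it rewrites the headers and media metadata so the partial response is itself a playable file.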
Personally, the main reason to choose RTMP over progressive download is it allows your user to skip to the middle of a video without having to download the whole file.
These days unless you need to record, there's not really any point to using RTMP. HTTP is simpler and obviously much more widely supported, easier to debug and indeed it does allow for seeking, even over CDNs. This is what I have set up at Viddler.

Is it possible to downsample an audio stream at runtime with Flash or FMS?

I'm no expert in audio, so if any of you folks are, I'd appreciate your insights on this.
My client has a handful of MP3 podcasts stored at a relatively high bit rate, and I'd like to be able to serve those files to her users at "different" bit rates depending on that user's credentials. (For example, if you're an authenticated user, you might get the full, unaltered stream, but if you're not, you'd get a lower-bit-rate version -- or at least a purposely tweaked lower-quality version than the original.)
Seems like there are two options: downsampling at the source and downsampling at the client. In this case, knowing of course that the source stream would arrive at the client at a high bit rate (and that there are considerations to be made about that, which I realize), I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Is doing so possible with the Flash Player and ActionScript alone, at runtime (even with a third-party library), or does a scenario like this one require a server-based solution? If the latter, can Flash Media Server handle this requirement specifically? Again, I'd like to avoid using FMS if I can, since she doesn't really have the budget for it, but if that's the only option and it's really an option, I'm open to considering it.
Thanks in advance...
Note: Please don't question the sanity of the request -- I realize it might sound a bit strange, but the requirements are what they are. In that light, for purposes of answering the question, you can ignore the source and delivery path of the bits; all I'm really looking for is an explanation of whether (and ideally how) a Flash client can downsample an MP3 audio stream at runtime, irrespective of whether the audio's arriving over a network connection or being read directly from disk. Thanks much!
I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Please elucidate the reasons, because resampling on the client end would normally be considered crazy: wasting bandwidth sending the higher-quality version to a user who cannot hear it, and risking a canny user ripping the higher-quality stream as it comes in over the network.
In any case the Flash Player doesn't give you the tools to process audio, only play it.
You shouldn't need FMS to process audio at the server end. You could have a server-side script that loads the newly uploaded podcasts and saves them back out as lower-bitrate files, which could then be served to lowly users via a normal web server. For Python, see e.g. PyMedia or py-lame; even a shell script using lame or ffmpeg from the command line should be pretty easy to pull off.
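For instance (the file name and bitrate are placeholders), a one-line ffmpeg re-encode would do it:
ffmpeg -i podcast.mp3 -c:a libmp3lame -b:a 64k podcast_64k.mp3
Run that once per upload and serve the resulting file to the unauthenticated users.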
If storage is at a premium, have you looked into AAC audio? I believe Flash 9 and 10 on desktop browsers will play it. In my experience AAC takes only half the size of a comparable MP3 (i.e. an 80 kbps AAC will sound about the same as a 160 kbps MP3).
As for playback quality, if I recall correctly there are audio playback settings in the Publish Settings section of the Flash editor. Whether or not the playback bitrate can be changed at runtime is something I'm not sure of.
