Encoding videos for use with Adobe Live Streaming

I have an original video coded at 20Mbps, 1920x1080, 30fps and want to convert it down to be 640x480 30fps at a range of (3 different) bitrates for use by Adobe Live Streaming.
Should I use ffmpeg to resize and encode at the 3 bitrates then use f4fpackager to create the f4m f4f and f4x files or just use ffmpeg to reduce the resolution and then f4fpackager to encode the relevant bitrates?
I've made several attempts so far, but the encoded videos seem to play at a much higher bitrate than they were encoded at. For example, if I set up the OSMF to play from my webserver, I'd expect my best encoded video to play at 1,500kbps, but it's way above that.
Has anyone had any experience of encoding for use like this?
I'm using the following options to f4fpackager
--bitrate=1428 --segment-duration 30 --fragment-duration 2

f4fpackager doesn't do any encoding, it does 2 things:
- fragment the mp4 files (mp4 -> f4f)
- generate a Manifest (f4m) file referencing all your fragmented files (f4f)
So the process is:
- transcode your source file at every size/bitrate you want to provide (e.g. 1920x1080 @ 4Mbps, 1280x720 @ 2Mbps, etc.)
- use f4fpackager to convert the mp4 to f4f (this is the fragmentation step)
- use f4fpackager to generate the Manifest.f4m referencing the files that you generated in the previous step
The --bitrate option of f4fpackager should match the value you used with ffmpeg; this parameter is used to generate the manifest file with the correct bitrate value for each quality.
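The transcode-then-package steps above can be sketched as a small helper that builds the command lines for each rendition. The file names, target size, three bitrates, and exact ffmpeg flags here are illustrative assumptions, not values from the original setup:

```python
def build_commands(source, bitrates_kbps, width=640, height=480, fps=30):
    """Return (ffmpeg, f4fpackager) command lines for each rendition.

    The key point from the answer: the --bitrate passed to f4fpackager
    must match the bitrate the rendition was actually encoded at.
    """
    commands = []
    for kbps in bitrates_kbps:
        out = f"video_{kbps}k.mp4"  # hypothetical output name
        # Step 1: transcode to the target size/bitrate with ffmpeg.
        ffmpeg = (f"ffmpeg -i {source} -s {width}x{height} -r {fps} "
                  f"-b:v {kbps}k {out}")
        # Step 2: fragment the mp4; --bitrate matches step 1 so the
        # manifest advertises the correct bitrate for this quality.
        packager = (f"f4fpackager --input-file={out} --bitrate={kbps} "
                    f"--segment-duration 30 --fragment-duration 2")
        commands.append((ffmpeg, packager))
    return commands

for ff, pk in build_commands("source.mp4", [500, 1000, 1500]):
    print(ff)
    print(pk)
```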

Related

Does HTTP video streaming need to download the full video data before playback starts?

Two MP4 files are available on my server. When I browse to the video1.mp4 URL, the browser starts playing the file right away.
If I open the video2.mp4 URL, it takes a long time before playback starts.
I checked the browser's temp files at that point: it downloads the full video and only then starts to play.
After clearing the temp files I tried video1 again; it only needs a small portion of the file. (video1 is 800 MB but only 50 MB is in temp storage; video2 is 500 MB and the full 500 MB is in temp storage.)
What is the difference between the two video files? Both are MP4, but one requires the full video data while the other needs only a partial amount. Why?
The two files are encoded differently. MP4 files are divided into packets called boxes; the box that describes the type of compression and the different tracks present in the video file is the 'moov' box. Traditionally it lives at the end of the file, but encoding software can be configured to write it at the beginning.
For example, with ffmpeg you can use the qt-faststart tool (or the -movflags +faststart output option) to put the metadata at the beginning.
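As a rough illustration of the difference, here is a minimal Python sketch that walks a file's top-level boxes to see whether 'moov' comes before 'mdat' (progressive-download friendly) or after it. It ignores 64-bit box sizes and other edge cases, so treat it as a demonstration of the layout, not a production parser:

```python
import struct

def top_level_boxes(data: bytes):
    """Yield (box_type, size) for each top-level box in MP4 bytes."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # 64-bit sizes and size-0 boxes omitted in this sketch
            break
        yield box_type.decode("latin-1"), size
        offset += size

def is_faststart(data: bytes) -> bool:
    """True if 'moov' appears before 'mdat', i.e. playback can start
    before the whole file has downloaded."""
    for box_type, _ in top_level_boxes(data):
        if box_type == "moov":
            return True
        if box_type == "mdat":
            return False
    return False
```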

Why do different scanned images have same size in file transfer mode?

I am developing a scanner application in C++. I can scan documents and get the images in file transfer mode, but all the scanned documents have the same file size even though their contents differ.
FileFormat: TWFF_TIFF
Pixel flavor: TWPF_CHOCOLATE
XResolution: 75
YResolution: 75
ICAP_UNITS: TWUN_INCHES
ICAP_PIXELTYPE: TWPT_GRAY
ICAP_BRIGHTNESS: 0
ICAP_CONTRAST: 0
ICAP_BITDEPTH: 8
Every scanned image comes out at 327 KB. Why would this be?
Also, how can I set JPEG compression? Does file transfer mode support JPEG compression?
Probably your scanner/driver is writing uncompressed TIFF files, so the file size depends only on the dimensions of the image. If each image is the same width & height, the resulting files will be the same size.
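As a back-of-the-envelope check, the raw data size of an uncompressed scan follows directly from the capabilities above. This sketch ignores TIFF header/IFD overhead and assumes the page dimensions shown (which are an assumption; the asker's scanner may use a different scan area):

```python
def uncompressed_size_bytes(width_in, height_in, dpi, bit_depth):
    """Approximate raw pixel-data size of an uncompressed scan:
    (width * dpi) * (height * dpi) * bytes-per-pixel."""
    width_px = int(width_in * dpi)
    height_px = int(height_in * dpi)
    return width_px * height_px * bit_depth // 8

# A hypothetical 8.5x11" page at 75 dpi, 8-bit grayscale:
print(uncompressed_size_bytes(8.5, 11, 75, 8))  # -> 525525 bytes (~513 KB)
```

The exact 327 KB figure will depend on the scanner's actual scan area, but the point stands: with no compression, identical dimensions and bit depth give identical file sizes regardless of content.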
All the file-transfer stuff in TWAIN is implemented by the driver (not TWAIN itself) and all the features are optional. So you need to check if your scanner/driver supports JPEG compression when transferring TIFF files. It might, it might not.
You can try setting ICAP_COMPRESSION to TWCP_JPEG, after setting ICAP_IMAGEFILEFORMAT to TWFF_TIFF. Probably if both succeed you will get JPEG compression in your TIFFs, although it might be either "Old Style" JPEG or "New Style" JPEG. If you don't know what that means, you probably should find out.
I wrote a tool for this kind of experimenting, years ago, still maintained and free from Atalasoft: Twirl TWAIN Probe
Caution: Many scanners don't support File Transfer Mode (it is optional) and those that do may not support the TIFF file format (the only required file format is BMP!) If you need to support a wide variety of scanners, you'll have to use TWAIN's Native Transfer Mode or Memory Transfer Mode, and write the images to file yourself e.g. using LibTiff.

Is there an indicator other than file extension that indicates the file type?

I am trying to make a .txt file look like a .jpg file so it can be sent over Wi-Fi using an Eye-Fi SD card. The card only sends .jpg files, for several reasons. One reason is that the transmission path of a picture from the SD card to the computer looks like:
Camera writes pictures to EYE-FI SD -> EYE-FI connects to local router -> local router uploads to EYE-FI servers -> EYE-FI servers upload to your computer.
[Explanation]
There could be some filter on the server end, so I found software that lets the user bypass the Eye-Fi servers; now I know I am only dealing with the SD card (it's also nice to know that no one else is looking at my files). After some experimentation, I figured out that I can put .jpg files on the card and have them transmitted once a picture is taken. I also found that the pictures must be named in short format: a name no longer than 8 characters (excluding the file extension), which probably has to do with the card being formatted as FAT32 (the card can be reformatted and still works). I uploaded a .txt file to the card, gave it a similar name, and renamed it with a .jpg extension. It did transfer, which indicates to me that there is probably something other than the file extension that denotes how the file is formatted.
[Questions]
1) Is there some way I can spoof .txt files to make them look like .jpg files?
2) Is there a program (for Linux) I can use to play around with values on the card so I can figure out what triggers an upload? Any ideas on what could trigger the upload?
1) Yes, there are hex values in the file that indicate it is a .jpg. If you open a .jpg file with a hex editor, you will notice header segments containing information about how the image was compressed, sometimes what created the image, some firmware information, etc. In the editor you can find the bytes "FF D8"; this marker indicates the beginning of the image file. It is followed shortly by "FF C0", and the bytes after it (a two-byte segment length, a one-byte precision field, then two bytes each of height and width) describe the size of the image, which (I am guessing) is used by whatever software displays the image. The end of a jpg file is denoted by the two bytes "FF D9". Fun fact: I played around with the jpg file I was using, and it seems you can put text after the "FF D9" and the jpg still works. I thought this was neat. Source
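The marker walk described above can be sketched in a few lines of Python. This is a minimal demonstration that handles only the segments needed to reach SOF0 (FF C0); real JPEGs with restart markers or other standalone markers would need more care:

```python
import struct

def jpeg_dimensions(data: bytes):
    """Walk JPEG marker segments and return (width, height) from SOF0."""
    assert data[:2] == b"\xFF\xD8"          # SOI: start of image
    offset = 2
    while offset + 4 <= len(data):
        assert data[offset] == 0xFF          # every segment starts with FF
        marker = data[offset + 1]
        if marker == 0xC0:                   # SOF0: baseline frame header
            # length(2), precision(1), height(2), width(2) follow the marker
            height, width = struct.unpack_from(">HH", data, offset + 5)
            return width, height
        # skip this segment: 2 marker bytes + the length it declares
        seg_len = struct.unpack_from(">H", data, offset + 2)[0]
        offset += 2 + seg_len
    return None
```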
None of this was needed to get the Eye-Fi to upload the file, though. As I said in my question, the card needs the file name to be in short format (no more than 8 characters) and to have an acceptable file extension; in my case I used ".jpg". I wrote a text file and just saved it as "text.jpg". I found that there is a minimum size required in order to transfer the file, which is strange.
My hex editor of choice for this was Bless; it is good for opening files, but I have yet to figure out whether it can open volumes. It doesn't seem like it can.

GDCL Mpeg-4 Multiplexor Problem

I just created a simple graph:
SourceFilter(*.mp4 file format) ---> GDCL MPEG 4 Mux Filter ---> File writer Filter
This works fine. But when the source is an h264 file:
SourceFilter( *.h264 file format) ---> GDCL MPEG 4 Mux Filter---> File writer Filter
It records a file, but the recorded file does not play in VLC, QuickTime, BS Player, or Windows Media Player.
What am I doing wrong? Any ideas on how to record an h264 video source? Do I need an H264 mux?
Best Wishes
PS: I just want to record video, by the way... why do I need a mux?
There are two H.264 formats used by DirectShow filters. One is Byte Stream Format, in which each NALU is preceded by a start code 00 00 01. The other is the format used within MP4 files, in which each start code is preceded by a length (the media type or the MP4 file metadata specifies how many bytes are used in the length field). The problem is that some FOURCCs are used for both formats.
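To make the distinction concrete, here is a small Python sketch (for illustration; the actual filter is C++) that converts length-prefixed NALUs to Byte Stream Format. The 4-byte length field is an assumption; in a real MP4 the avcC metadata specifies whether lengths are 1, 2, or 4 bytes:

```python
def length_prefixed_to_bsf(data: bytes, length_size: int = 4) -> bytes:
    """Convert length-prefixed NALUs (MP4 style) to Byte Stream Format,
    where each NALU is preceded by a 00 00 01 start code."""
    out = bytearray()
    offset = 0
    while offset + length_size <= len(data):
        # read the NALU length, then replace it with a start code
        nal_len = int.from_bytes(data[offset:offset + length_size], "big")
        offset += length_size
        out += b"\x00\x00\x01" + data[offset:offset + nal_len]
        offset += nal_len
    return bytes(out)
```

Feeding one format to a component that expects the other produces exactly the kind of unplayable output described in the question, because the byte stream is parsed with the wrong framing.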
The MP4 mux sample accepts either BSF or length-preceded data, depending on the subtype given; it does not attempt to work out which it is. Most likely, when you feed in the H.264 elementary stream, you are giving the mux a FOURCC or media type that the mux interprets as length-prepended, when you are actually giving it BSF data. Check TypeHandler::CanSupport.
If you just want to save H.264 video to a file, you can use a Dump filter to just write the bits to a file. If you are saving BSF, this is a valid H.264 elementary stream file. If you want support for the majority of players, or if you want seeking support, then you will want to write the elementary stream into a container with an index, such as MP4. In this case, you need a mux, not for the multiplexing, but for the indexing and metadata creation.
G

get flv length before uploading to server

I'm using the FileReference class to upload flvs to a server.
Is it possible to check the FLV's duration (not its file size) before allowing an upload?
Are you targeting Flash Player 10 alone or lower versions too? Lower versions of Flash Player (9 etc.) do not allow the uploading SWF to read the contents of the file (other than creationDate, creator (the Macintosh creator type of the file), modificationDate, name, size in bytes, and type), so there is no way you can do this on those players.
If you are targeting solely FP10 users, you can load the FLV into a ByteArray in your SWF and either:
- Play it using an FLV player and read the duration property from the player. But I couldn't find an FLV player that takes a ByteArray as input, and after reading this thread on SO, it seems that is not possible at all.
- Parse the FLV file and read the duration property from its metadata. The FLV file specification is open, but this isn't going to be easy.
Update to the comment:
Excerpts from the FLV file spec:
onMetaData
An FLV file can contain metadata with an "onMetaData" marker. Various stream properties are available to a running ActionScript program via the NetStream.onMetaData property. The available properties differ depending on the software used.
Common properties include:
- duration: a DOUBLE indicating the total duration of the file in seconds
- width: a DOUBLE indicating the width of the video in pixels
- height: a DOUBLE indicating the height of the video in pixels
- videodatarate: a DOUBLE indicating the video bit rate in kilobits per second
- framerate: a DOUBLE indicating the number of frames per second
- videocodecid: a DOUBLE indicating the video codec ID used in the file (see "Video tags" on page 8 for available CodecID values)
- audiosamplerate: a DOUBLE indicating the frequency at which the audio stream is replayed
- audiosamplesize: a DOUBLE indicating the resolution of a single audio sample
- stereo: a BOOL indicating whether the data is stereo
- audiocodecid: a DOUBLE indicating the audio codec ID used in the file (see "Audio tags" on page 6 for available SoundFormat values)
- filesize: a DOUBLE indicating the total size of the file in bytes
An FLV file can contain metadata - it doesn't say it will contain metadata. It also says that the available properties vary based on the software used to create the FLV. So there is no guarantee (per the spec) that the duration property will be present. That said, duration is one of the basic properties of FLV, and it is safe to assume that any reasonable encoder includes it.
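For illustration, here is a quick-and-dirty Python sketch of pulling duration out of the onMetaData tag. In AMF0 a named property is the property name followed by a type byte (0x00 for number) and an 8-byte big-endian double; this version just scans for the "duration" string rather than fully parsing AMF, so an unrelated occurrence of that string could yield a false positive:

```python
import struct

def flv_duration(data: bytes):
    """Scan FLV bytes for the onMetaData 'duration' property.
    Returns the duration in seconds, or None if not found."""
    if data[:3] != b"FLV":
        return None
    idx = data.find(b"duration")
    # the 8-char property name must be followed by the AMF0
    # number type marker (0x00) and an 8-byte big-endian double
    if idx == -1 or idx + 17 > len(data) or data[idx + 8] != 0x00:
        return None
    return struct.unpack_from(">d", data, idx + 9)[0]
```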
You can use Netstream.appendBytes to feed FileReference.data (after a call to browse, before a call to upload) to a NetStream used for playing a video. From there, the duration can be taken from the metadata, as described elsewhere on this thread. Note that at least Flash Player 10 is required for this approach.
