BitmapData and JpegEncoder Limitations - apache-flex

I am trying to save out a large image from Flash using BitmapData and the JPGEncoder. I am looking into the limitations of this process and have noticed that you can only set the BitmapData pixel width and height to a certain amount, and that this might interact with the quality you set on the JPGEncoder (1-100).
Does anyone know what the specific limitations of these two things are? I'm basically trying to see just how large an image I can save out (I need the exported image for printing purposes, so I need it at as high a quality as possible).
I have read articles saying that in Flash Player 10 you can render up to something like 16,000 px, but when I tried an image at 3500 x 3500 it timed out, so I'm not sure whether that information is correct.

The maximum image size up to Flash Player 9 is 2880 x 2880 pixels; Flash Player 10 increased the limit to 4096 x 4096 (more precisely, at most 8,191 pixels on a side and 16,777,215 pixels in total). The same limit applies to the Stage, Sprites and MovieClips.
The quality setting of the JPGEncoder class does not circumvent this limitation, because the limit is enforced by the Flash Player core, not by the encoder.
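As a rough sketch of what that means in practice, assuming the as3corelib JPGEncoder (com.adobe.images.JPGEncoder) and a Flash Player 10 target; encodeLargeSnapshot is just an illustrative helper name:

import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.utils.ByteArray;
import com.adobe.images.JPGEncoder;

function encodeLargeSnapshot(source:DisplayObject):ByteArray {
    // 4096 x 4096 is the largest square BitmapData Flash Player 10 allows;
    // anything bigger fails at construction time (Error #2015: Invalid BitmapData).
    var bmd:BitmapData = new BitmapData(4096, 4096, false, 0xFFFFFF);
    bmd.draw(source);

    // The quality argument (1-100) only affects JPEG compression, not the size limit.
    var encoder:JPGEncoder = new JPGEncoder(90);
    var jpegBytes:ByteArray = encoder.encode(bmd);

    bmd.dispose(); // free the ~64 MB of uncompressed pixels as soon as possible
    return jpegBytes;
}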

Related

Cropping YUV_420_888 images for Firebase barcode decoding

I'm using the Firebase-ML barcode decoder in a streaming (live) fashion using the Camera2 API. The way I do it is to set up an ImageReader that periodically gives me Images. The full image is the resolution of my camera, so it's big - it's a 12MP camera.
The barcode scanner takes about 0.41 seconds to process an image on a Samsung S7 Edge, so I set up the ImageReaderListener to decode one at a time and throw away any subsequent frames until the decoder is complete.
The image format I'm using is YUV_420_888 because that's what the documentation recommends, and because if you try to feed the ML Barcode decoder anything else it complains (run time message to debug log).
All this is working but I think if I could crop the image it would work better. I'd like to leave the camera resolution the same (so that I can display a wide SurfaceView to help the user align his camera to the barcode) but I want to give Firebase a cropped version (basically a center rectangle). By "work better" I mean mostly "faster" but I'd also like to eliminate distractions (especially other barcodes that might be on the edge of the image).
This got me trying to figure out the best way to crop a YUV image, and I was surprised to find very little help. Most of the examples I have found online do a multi-step process where you first convert the YUV image into a JPEG, then render the JPEG into a Bitmap, then scale that. This has a couple of problems in my mind:
It seems like that would have significant performance implications (in real time). This would help me accomplish a few things including reducing some power consumption, improving the response time, and allowing me to return Images to the ImageReader via image.close() more quickly.
This approach doesn't get you back to an Image, so you have to feed Firebase a Bitmap instead, and that doesn't seem to work as well. I don't know what Firebase is doing internally, but I kind of suspect it's working mostly (maybe entirely) off of the Y plane, and that the translation of Image -> JPEG -> Bitmap muddies that up.
I've looked around for YUV libraries that might help. There is something in the wild called libyuv-android but it doesn't work exactly in the format firebase-ml wants, and it's a bunch of JNI which gives me cross-platform concerns.
I'm wondering if anybody else has thought about this and come up with a better solution for cropping YUV_420_888 images in Android. Am I not able to find this because it's a relatively trivial operation? There's stride and padding to be concerned with, among other things. I'm not an image/color expert, and I kind of feel like I shouldn't attempt this myself, my particular concern being that I figure out something that works on my device but not on others.
Update: this may actually be kind of moot. As an experiment I looked at the Image that comes back from ImageReader. It's an instance of ImageReader.SurfaceImage which is a private (to ImageReader) class. It also has a bunch of native tie-ins. So it's possible that the only choice is to do the compress/decompress method, which seems lame. The only other thing I can think of is to make the decision myself to only use the Y plane and make a bitmap from that, and see if Firebase-ML is OK with that. That approach still seems risky to me.
I tried to scale down the YUV_420_888 output image today; I think it is quite similar to cropping the image.
In my case, I take the three planes from the Image to get the Y, U and V bytes:
// image is the android.media.Image delivered by the ImageReader
ByteBuffer yBytes = image.getPlanes()[0].getBuffer();
ByteBuffer uBytes = image.getPlanes()[1].getBuffer();
ByteBuffer vBytes = image.getPlanes()[2].getBuffer();
Then I convert them to an RGB array for building the Bitmap. I found that if I walk the YUV data over the original image width and height with a step of 2, I get a Bitmap at half the original size.
What you should prepare:
yBytes, uBytes and vBytes,
the width and height of the Image,
the Y row stride,
the U/V row stride,
the U/V pixel stride,
an output array (its length should equal the output image width * height; for me the output is half the original width and height, so a quarter of the pixels).
This means that if you can find the crop region's corner positions in the image, you can fill the new RGB array from just that part of the YUV data. For the half-scale case, the Y sample for output pixel (x, y) sits at index yRowStride * (2 * y) + 2 * x, and the corresponding U/V samples at uvRowStride * y + uvPixelStride * x, since the chroma planes are already subsampled by two in each direction.
Hope it can help you to solve the problem.

High Resolution Capture and Encoding

I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high resolution input (such as 1920x1200x32). At least, FillBuffer() seems to only be able to process around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
What are my options to speed up the process? What's the right way to do live high res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
You have good answers posted to your question here: High resolution capture and encoding too slow. The task is too complex for the CPU in your system; it is simply not fast enough to perform real-time video encoding in the configuration you have set it up to use.

How to Save an Image of a Large Flex Component (EX: 25000px by 3000px @ 72dpi)

My application consists of displaying a large custom tree like structure to the user that can eventually grow to massive proportions like the dimensions listed in the question. I allow them to export the image with the following line of code tied to a button click event:
var image:ImageSnapshot = ImageSnapshot.captureImage(this, 72, new PNGEncoder(), false);
I've managed to export images close to the dimensions listed, but around that size I start to get the error message below after the export spins for close to 15 seconds:
Error: Error #1000: The system is out of memory.
at flash.utils::ByteArray/writeBytes()
at mx.graphics::ImageSnapshot$/mergePixelRows()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:511]
at mx.graphics::ImageSnapshot$/captureAll()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:482]
at mx.graphics::ImageSnapshot$/captureImage()[E:\dev\4.x\frameworks\projects\framework\src\mx\graphics\ImageSnapshot.as:318]
at vertical/saveChart()[C:\devel\workspace\vertical\src\CustomObject.mxml:501]
at vertical/__saveImageBtn_click()[C:\devel\workspace\vertical\src\CustomObject.mxml:574]
Is the Flash Player plugin for my browser running out of memory? I noticed in my task manager that it got up to about 1.2 GB of memory usage (I have 4 GB on my system). If that is the case, is it possible to limit the memory usage for a given function like the ImageSnapshot.captureImage() call above?
Is there maybe a way to generate the component into 2 or 4 ImageSnapshot objects and piece them together afterward?
Any advice would be greatly appreciated.
I believe the latest Flash Player 11 has a new feature to solve this issue:
"Enhanced high resolution bitmap support — BitmapData objects are no longer limited to a maximum resolution of 16 megapixels (16,777,215 pixels), and maximum bitmap width/height is no longer limited to 8,191 pixels, enabling the development of apps that utilize very large bitmaps." from this PDF
If you are using BitmapData, it makes a difference which Flash Player you are targeting:
Version vs. maximum bitmap size:
Flash Player 9 and earlier: 2880 x 2880 px
Flash Player 10: 4096 x 4096 px
Flash Player 11: unlimited
I don't know exactly what you are trying to do with this huge capture, but I would recommend using tiles: break it down into chunks of relatively small bitmaps and create them separately, so you never have to hold that huge amount of data in memory at once (see the sketch below).
Anyway, it would be nice to know whether it is possible to encode an image that size at all without Error #1000 out-of-memory errors.
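A minimal sketch of that tiling idea, assuming the Flex PNGEncoder (mx.graphics.codec.PNGEncoder); captureTile and tileSize are illustrative names, and how you store or stitch the tiles afterwards is up to you:

import flash.display.BitmapData;
import flash.geom.Matrix;
import flash.utils.ByteArray;
import mx.core.UIComponent;
import mx.graphics.codec.PNGEncoder;

// Capture one tile of the component at a time instead of one huge bitmap.
function captureTile(source:UIComponent, col:int, row:int, tileSize:int):ByteArray {
    var w:int = int(Math.min(tileSize, source.width - col * tileSize));
    var h:int = int(Math.min(tileSize, source.height - row * tileSize));
    var tile:BitmapData = new BitmapData(w, h, false, 0xFFFFFF);

    // Shift the component so the wanted region lands at (0, 0) of the tile.
    var m:Matrix = new Matrix();
    m.translate(-col * tileSize, -row * tileSize);
    tile.draw(source, m);

    var png:ByteArray = new PNGEncoder().encode(tile);
    tile.dispose(); // release the uncompressed pixels right away
    return png;     // write each tile out, then stitch them together afterwards
}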

Loading images in Flex causes memory to go way up in Internet Explorer 7 (& other browsers)

Loading images into Flex (each < 100 KB) causes IE7's memory usage to increase by about a megabyte per image. What's going on here? Here is the code I have -- this runs for each image:
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, Retry); // retry it
loader.contentLoaderInfo.addEventListener(Event.COMPLETE,
    function(e:Event):void
    {
        var bitmap:Bitmap = (e.target.loader as Loader).content as Bitmap;
        // save it & load the next one
    });
loader.load(new URLRequest(imageURL));
This also happens in Chrome (2.0.172.33) & Firefox (3.0.10). How can I reduce the memory usage?
Thanks!
I don't think that an increase of about 1 MB per image is necessarily something to worry about. You point out that the images are less than 100 KB, but you're probably looking at the wrong number: for example, a 640x480 JPG that I've just been sent takes ~48 KB, but if you do the math, the raw image takes up about 900 KB (640 * 480 * 3 = 921,600 bytes). And if you use transparency, multiply by 4 instead of 3. The thing is that the player has to unpack the image in order to manipulate it. Storing the raw bytes alone for such an image can take up a megabyte or more, depending on its size.
Rather than focusing on reducing the memory usage per image (which is a rough estimate anyway), you're probably better off checking that you're cleaning up after yourself when you're done with the images. Failing to do that could lead to more serious problems. I agree with rhtx that Flex Builder's Profiler is a good tool for detecting leaks, if that's available to you. A simple test, in this case, could be loading an image, taking a memory snapshot, unloading the image, forcing a GC, taking a second snapshot and comparing it to the first one.
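A minimal sketch of that kind of cleanup, assuming the Loader from the question and that the COMPLETE handler was registered as a named function (the anonymous closure in the question cannot be removed later); unloadAndStop() needs Flash Player 10+, and System.gc() is only honored in the debug player and in AIR:

import flash.display.Bitmap;
import flash.display.Loader;
import flash.events.Event;
import flash.system.System;

function disposeImage(loader:Loader, completeHandler:Function):void {
    // Remove the completion listener so the handler (and anything it closes over)
    // can be garbage collected.
    loader.contentLoaderInfo.removeEventListener(Event.COMPLETE, completeHandler);

    // Free the decoded pixels explicitly instead of waiting for the GC.
    var bitmap:Bitmap = loader.content as Bitmap;
    if (bitmap && bitmap.bitmapData) {
        bitmap.bitmapData.dispose();
    }
    loader.unloadAndStop(); // Flash Player 10+; fall back to unload() on FP9

    System.gc(); // debug player / AIR only; a hint, not a guarantee
}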
Are there any browsers or browser/OS combinations where you don't experience this issue?
I've worked through a couple of difficult memory issues and I've found that the Profiler is more than worth the extra $500 for the Flex Pro license, if that is an option for you.
I don't know why you're seeing a 1M jump each time you load an image, but I do know that the browser requests chunks of memory from the OS when it sees that it needs additional memory - sort of like buying in bulk. So, it makes some sense that the memory increase would be greater than necessary.
I may be (am probably) way off on this, but the anonymous function being used to handle the complete event feels like it could be causing a memory leak for you. My thought is that the listener on contentLoaderInfo can never be removed and keeps the Loader, with its copy of the image's ByteArray, reachable. This is not a researched theory - if I have time later today, or tomorrow, I'll see if I can do some research and back this up (or correct myself).
Try tracing through your function in the Event.COMPLETE event listener to make sure it's being hit exactly when you expect it to be.
It might be better to just store the imageURL in an ArrayCollection and then reference that by binding an image to it... for example:
<mx:Image source="{myAC.getItemAt(6).imagePath}" ...
or
var tmpImage:Image = new Image();
tmpImage.source = myAC.getItemAt(i).imagePath;

Best way to show image sequence as a movie in Adobe AIR

I need to show an image sequence as a movie in an Adobe AIR application - i.e. treat lots of images as video frames and show the result. For now I am going to try simply loading them and displaying in a movie clip but this might be too slow. Any advanced ideas how to make it work? Images are located on a hard drive or very fast network share, so the bandwidth should be enough. There can be thousands of them, so preloading everything to memory doesn't seem feasible.
Adobe AIR is not 100% decided; I am open to other ideas for creating a cross-platform desktop application for this purpose quickly enough.
You could have an image control as your movie frame, then load up a buffer of BitmapData objects. Fill the BitmapData objects with the images as they come in, and then assign the next image in the buffer to the control:
private function drawNextImage(bitmapData:BitmapData):void {
    movieFrame.source = new Bitmap(bitmapData);
}
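Expanding that into a runnable sketch, as a variation that skips the Flex image control and swaps BitmapData on a plain Bitmap; it assumes the frames are decoded into BitmapData objects elsewhere, and FramePlayer, frameBuffer and pushFrame are illustrative names:

package {
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.events.Event;

    public class FramePlayer extends Sprite {
        private var frameBuffer:Vector.<BitmapData> = new Vector.<BitmapData>();
        private var movieFrame:Bitmap = new Bitmap();
        private var index:int = 0;

        public function FramePlayer() {
            addChild(movieFrame);
            // Advance one frame per stage frame; playback rate follows the SWF frame rate.
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        // Called by the loading code whenever another frame has been decoded.
        public function pushFrame(frame:BitmapData):void {
            frameBuffer.push(frame);
        }

        private function onEnterFrame(e:Event):void {
            if (index < frameBuffer.length) {
                // Swapping bitmapData avoids creating new display objects per frame.
                movieFrame.bitmapData = frameBuffer[index++];
            }
        }
    }
}

Frames that have already been shown can be disposed and dropped from the buffer to keep memory bounded, which matters with thousands of images.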
In case the images aren't big but you have a lot of them, it can be interesting to group sequences onto single bitmaps (à la mipmap). This way you can load in, say, one bitmap containing 50 images that form 2 seconds of video playback at 25 fps.
This method is especially useful online, where you want to limit the number of pings and handshakes causing slowness, but I reckon it can also be useful for optimizing loading, unloading and memory access.
