In Flex, I am using mx.graphics.codec.JPEGEncoder to save image files that are edited inside the application (normal manipulations like brightness, etc.). I am able to save files perfectly. What I want to know is: is there any way I can save the image with a higher dpi? Say, for instance, the image that is loaded and manipulated was originally 72 dpi; can I now save it at 150 or 300 dpi? If so, how do I do it?
It doesn't have to be with the JPEGEncoder; if there's any way to do it at all, such as using another library, I'm okay with it. Any suggestions?
Note: if it matters, I am using BitmapData to store the image and manipulations, and I save the image with JPEGEncoder by supplying its data as a ByteArray, like below.
var imageBytes:ByteArray = encoder.encode(myBitmapData);
If you want to save a 72 dpi image as a 150 or 300 dpi image, then in your case it is essentially an enlargement: the dpi value stored in a JPEG is just metadata telling the printer how densely to place the pixels, so gaining real print quality means adding pixels. You have to resample with something like bicubic interpolation.
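A minimal sketch of that resampling step, shown here in Java's standard Graphics2D API for illustration (the question is Flex; the closest ActionScript equivalent is BitmapData.draw() with a scaling Matrix and smoothing enabled, though Flash's smoothing is bilinear rather than bicubic):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public final class Upscale {
    // Enlarges src by the given factor using bicubic interpolation.
    static BufferedImage enlarge(BufferedImage src, double factor) {
        int w = (int) Math.round(src.getWidth() * factor);
        int h = (int) Math.round(src.getHeight() * factor);
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        // Bicubic interpolation keeps the enlargement smooth instead of blocky.
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }
}

Going from 72 to roughly 300 dpi at the same physical size is about a 4x enlargement, so do not expect the interpolation to invent detail that was never captured.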
I want to make an app that records video using a webcam.
My logic is to grab each frame as a bitmap and store it to a file using the AForge VideoFileWriter WriteVideoFrame function.
I open the file using the VideoFileWriter Open function:
writer.Open(path, VideoWidth, VideoHeight, frameRate, VideoCodec.H264, bitRate);
It is hard to determine the bitRate; when the bitrate is wrong, the whole program dies without any error.
I think the bitrate is related to the video frame width, height, framerate, and bit count, as well as the codec, but I'm not sure of the specific formula to calculate it.
I want to compress the video using the H.264 codec.
Can anyone help me find a solution?
Thank you very much.
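For what it's worth, a common rule-of-thumb estimate (not an AForge-specific formula; the bits-per-pixel factor is an assumption you tune per codec, with roughly 0.1-0.2 being a usual starting range for H.264) is bitrate = width * height * frameRate * bitsPerPixel. A minimal sketch, in Java for illustration since AForge itself is .NET:

public final class BitrateEstimate {
    // Rule of thumb: bits per second = pixels per frame * frames per second * bits per pixel.
    static int estimateBitsPerSecond(int width, int height, int frameRate, double bitsPerPixel) {
        return (int) (width * height * frameRate * bitsPerPixel);
    }

    public static void main(String[] args) {
        // Example: 1280x720 at 30 fps with 0.1 bits/pixel is about 2.8 Mbps.
        System.out.println(estimateBitsPerSecond(1280, 720, 30, 0.1));
    }
}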
I'm using ML Kit to label images with a custom model on Android. Which InputImage is better to pass into the model for more accurate results: a Bitmap or an image file URI?
There should be no difference in accuracy between the two if the image is converted to a bitmap correctly. However, if you need a bitmap in your app anyway, passing the bitmap to ML Kit will reduce latency, since creating an ML Kit InputImage from a file path converts it to a bitmap internally.
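A minimal sketch of the two ways to construct an InputImage (Java, ML Kit's com.google.mlkit.vision.common API; the rotation value here is an assumption for illustration):

import android.content.Context;
import android.graphics.Bitmap;
import android.net.Uri;
import java.io.IOException;
import com.google.mlkit.vision.common.InputImage;

public final class InputImages {
    // From a Bitmap you already hold; no extra decode happens.
    static InputImage fromBitmap(Bitmap bitmap) {
        return InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0);
    }

    // From a file/content URI; ML Kit decodes it to a bitmap internally,
    // which is the extra latency mentioned above.
    static InputImage fromUri(Context context, Uri uri) throws IOException {
        return InputImage.fromFilePath(context, uri);
    }
}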
I'm using the Firebase-ML barcode decoder in a streaming (live) fashion using the Camera2 API. The way I do it is to set up an ImageReader that periodically gives me Images. The full image is the resolution of my camera, so it's big - it's a 12MP camera.
The barcode scanner takes about 0.41 seconds to process an image on a Samsung S7 Edge, so I set up the ImageReaderListener to decode one at a time and throw away any subsequent frames until the decoder is complete.
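A minimal sketch of that one-at-a-time gating (the detector wiring and rotation constant are assumptions for illustration, not code from the question):

import android.media.Image;
import android.media.ImageReader;
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetector;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import java.util.concurrent.atomic.AtomicBoolean;

public final class ThrottledBarcodeListener implements ImageReader.OnImageAvailableListener {
    private final FirebaseVisionBarcodeDetector detector;
    private final int rotation; // a FirebaseVisionImageMetadata.ROTATION_* constant
    private final AtomicBoolean busy = new AtomicBoolean(false);

    ThrottledBarcodeListener(FirebaseVisionBarcodeDetector detector, int rotation) {
        this.detector = detector;
        this.rotation = rotation;
    }

    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage(); // also skips over stale frames
        if (image == null) return;
        if (!busy.compareAndSet(false, true)) {
            image.close(); // decoder still running: throw this frame away
            return;
        }
        detector.detectInImage(FirebaseVisionImage.fromMediaImage(image, rotation))
                .addOnCompleteListener(task -> {
                    image.close();   // hand the buffer back to the ImageReader
                    busy.set(false); // accept the next frame
                });
    }
}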
The image format I'm using is YUV_420_888 because that's what the documentation recommends, and because if you try to feed the ML Barcode decoder anything else it complains (run time message to debug log).
All this is working but I think if I could crop the image it would work better. I'd like to leave the camera resolution the same (so that I can display a wide SurfaceView to help the user align his camera to the barcode) but I want to give Firebase a cropped version (basically a center rectangle). By "work better" I mean mostly "faster" but I'd also like to eliminate distractions (especially other barcodes that might be on the edge of the image).
This got me trying to figure out the best way to crop a YUV image, and I was surprised to find very little help. Most of the examples I found online do a multi-step process where you first convert the YUV image into a JPEG, then render the JPEG into a Bitmap, then scale that. This has a couple of problems in my mind:
It seems like that would have significant performance implications (in real time). Avoiding it would help me accomplish a few things, including reducing power consumption, improving response time, and allowing me to return Images to the ImageReader via image.close() more quickly.
This approach doesn't get you back to an Image, so you have to feed Firebase a Bitmap instead, and that doesn't seem to work as well. I don't know what Firebase is doing internally, but I kind of suspect it's working mostly (maybe entirely) off of the Y plane, and that the translation of Image -> JPEG -> Bitmap muddies that up.
I've looked around for YUV libraries that might help. There is something in the wild called libyuv-android but it doesn't work exactly in the format firebase-ml wants, and it's a bunch of JNI which gives me cross-platform concerns.
I'm wondering if anybody else has thought about this and come up with a better solution for cropping YUV_420_888 images in Android. Am I not able to find this because it's a relatively trivial operation? There's stride and padding to be concerned with, among other things. I'm not an image/color expert, and I kind of feel like I shouldn't attempt this myself, my particular concern being that I'd figure out something that works on my device but not on others.
Update: this may actually be kind of moot. As an experiment I looked at the Image that comes back from ImageReader. It's an instance of ImageReader.SurfaceImage which is a private (to ImageReader) class. It also has a bunch of native tie-ins. So it's possible that the only choice is to do the compress/decompress method, which seems lame. The only other thing I can think of is to make the decision myself to only use the Y plane and make a bitmap from that, and see if Firebase-ML is OK with that. That approach still seems risky to me.
I tried scaling down a YUV_420_888 output image today, and I think it is quite similar to cropping one.
In my case, I take the three byte arrays from the Image planes, representing Y, U, and V (each plane's ByteBuffer can be copied into a byte[]):
yBytes = planes[0].getBuffer()  // Y, where planes = image.getPlanes()
uBytes = planes[1].getBuffer()  // U
vBytes = planes[2].getBuffer()  // V
Then I convert them to an RGB array to build the bitmap. I found that if I read the YUV arrays at the original image width and height with a step of 2, I can scale my bitmap down to half size.
What you should prepare:
yBytes,
uBytes,
vBytes,
the width of the Image,
the height of the Image,
the Y row stride,
the UV row stride,
the UV pixel stride,
an output array (its length should equal the output image width * height; for me the output is half the original width and height, i.e. 1/4 of the pixels)
This means that if you can find the crop region's positions (the four corners) in your image, you can fill the new RGB array with just that region's YUV data; see the sketch below.
Hope it helps you solve the problem.
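A minimal sketch of the half-size YUV_420_888 to ARGB conversion described above (assumptions: BT.601-style coefficients, a Y pixel stride of 1, and a clean 2x downscale; shift the loop start offsets to crop a region instead; the names are illustrative, not a library API):

import android.media.Image;
import java.nio.ByteBuffer;

public final class YuvScaler {
    // Returns ARGB pixels at half the source width and height (1/4 of the pixels).
    static int[] yuvToArgbHalfSize(Image image) {
        int w = image.getWidth(), h = image.getHeight();
        Image.Plane[] planes = image.getPlanes();
        byte[] yBytes = copy(planes[0].getBuffer());
        byte[] uBytes = copy(planes[1].getBuffer());
        byte[] vBytes = copy(planes[2].getBuffer());
        int yRowStride = planes[0].getRowStride();
        int uvRowStride = planes[1].getRowStride();
        int uvPixelStride = planes[1].getPixelStride();

        int outW = w / 2, outH = h / 2;
        int[] out = new int[outW * outH];
        for (int oy = 0; oy < outH; oy++) {
            for (int ox = 0; ox < outW; ox++) {
                int x = ox * 2, y = oy * 2; // step 2 over the source image
                int Y = yBytes[y * yRowStride + x] & 0xFF;
                // U and V are subsampled 2x2, hence the /2 and the pixel stride.
                int uvIndex = (y / 2) * uvRowStride + (x / 2) * uvPixelStride;
                int U = (uBytes[uvIndex] & 0xFF) - 128;
                int V = (vBytes[uvIndex] & 0xFF) - 128;
                int r = clamp(Y + (int) (1.402f * V));
                int g = clamp(Y - (int) (0.344f * U + 0.714f * V));
                int b = clamp(Y + (int) (1.772f * U));
                out[oy * outW + ox] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return out;
    }

    private static byte[] copy(ByteBuffer buffer) {
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        return bytes;
    }

    private static int clamp(int v) { return v < 0 ? 0 : Math.min(v, 255); }
}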
I am trying to save out a large image from Flash using BitmapData and the JPEGEncoder. I am looking into the limitations of this process and have noticed you can only set the BitmapData pixel width and height to a certain amount, and I suspect this might be affected by what you set the JPEGEncoder quality to (1-100).
Does anyone know the specific limitations of these two things? I'm basically trying to see just how large an image I can save out (the exported image will be used for printing, so I need it at as high a quality as possible).
I have read articles that say in Flash Player 10 you can render up to something like 16,000 px. But I tried an image that is 3500 x 3500 and it timed out, so I'm not sure that information is correct.
The image size limit up to Flash Player 9 is 2880x2880; Flash Player 10 increased this limit to 4096x4096. This also applies to the Stage, Sprites, and MovieClips.
The quality passed to the JPEGEncoder class does not circumvent this limitation, as the limit is tied to the Flash Player core.
I need to show an image sequence as a movie in an Adobe AIR application, i.e. treat lots of images as video frames and show the result. For now I am going to try simply loading them and displaying them in a movie clip, but this might be too slow. Any more advanced ideas for how to make it work? The images are located on a hard drive or a very fast network share, so bandwidth should be sufficient. There can be thousands of them, so preloading everything into memory doesn't seem feasible.
Adobe AIR is not 100% decided; I am open to other ideas for how to create a cross-platform desktop application for this purpose quickly enough.
You could have an image control as your movie frame, then allocate a buffer of BitmapData objects. Fill the BitmapData objects with the images as they come in, and then set the image control's source to display the next image in the buffer.
// movieFrame is assumed to be an mx.controls.Image already on the display list
private function drawNextImage(bitmapData:BitmapData):void {
    movieFrame.source = new Bitmap(bitmapData);
}
If the images aren't big but you have a lot of them, it can be interesting to group sequences onto single bitmaps (à la mipmaps, or sprite sheets). This way you can load in, say, one bitmap containing 50 images, forming 2 seconds of video playback at 25 fps.
This method is especially useful online, where you want to limit the number of pings and handshakes causing slowness, but I reckon it can also be useful for optimizing loading, unloading, and memory access.
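A minimal illustration of the sprite-sheet idea (shown in Java rather than ActionScript for a compact standalone example; in Flash the per-frame copy would be BitmapData.copyPixels()). Frames are laid out in a row-major grid on one big image, and frame i is pulled out by sub-rectangle:

import java.awt.image.BufferedImage;

public final class FrameSheet {
    private final BufferedImage sheet;
    private final int frameW, frameH, columns;

    FrameSheet(BufferedImage sheet, int frameW, int frameH) {
        this.sheet = sheet;
        this.frameW = frameW;
        this.frameH = frameH;
        this.columns = sheet.getWidth() / frameW;
    }

    // Returns a lightweight view of frame i; no pixel data is copied.
    BufferedImage frame(int i) {
        int x = (i % columns) * frameW;
        int y = (i / columns) * frameH;
        return sheet.getSubimage(x, y, frameW, frameH);
    }
}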