Image file URI vs Image Bitmap - android-image

I'm using ML Kit to label images with a custom model on Android. Which InputImage source gives more accurate results when passed to the model: a Bitmap or an image file URI?

There should be no difference in accuracy between the two, as long as the image is converted to a bitmap correctly. However, if you already need a bitmap in your app anyway, passing the bitmap to ML Kit will reduce latency, because creating an ML Kit InputImage from a file path converts it to a bitmap internally.
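As a minimal Java sketch (the model asset name and rotation value are placeholders, not something from the question), both inputs go through the same labeler:
import android.content.Context;
import android.graphics.Bitmap;
import android.net.Uri;
import com.google.mlkit.common.model.LocalModel;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.label.ImageLabeler;
import com.google.mlkit.vision.label.ImageLabeling;
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions;
import java.io.IOException;

class LabelerSketch {
    static void label(Context context, Bitmap bitmap, Uri imageUri) throws IOException {
        // Hypothetical custom model bundled as an asset.
        LocalModel model = new LocalModel.Builder()
                .setAssetFilePath("custom_model.tflite")
                .build();
        ImageLabeler labeler = ImageLabeling.getClient(
                new CustomImageLabelerOptions.Builder(model).build());

        // Option 1: a bitmap you already have in memory (lower latency).
        InputImage fromBitmap = InputImage.fromBitmap(bitmap, /* rotationDegrees */ 0);

        // Option 2: a file URI; ML Kit decodes it to a bitmap internally.
        InputImage fromUri = InputImage.fromFilePath(context, imageUri);

        // Accuracy should be the same either way; only latency differs.
        labeler.process(fromBitmap)
                .addOnSuccessListener(labels -> { /* use the labels */ });
    }
}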

Related

How to convert images to uint8 without losing information, to export them as a video in Google Earth Engine?

I have an ImageCollection that I want to export as a video file to my Drive. The documentation says I need to convert my images into uint8 RGB format for this (doc link). The problem is that when I convert them to uint8, almost all values get clipped to 255, so the whole scene is flat bright white!
It is of course impossible to do this losslessly, but how should I do it without losing much information, so the video at least looks good?
This works, but the quality is obviously not very good:
// Scale values assumed to lie roughly in the 0..3000 range into 0..255.
var to_uint8 = function(img) {
  return img.toDouble().divide(3000).multiply(255).uint8();
};
my_collection = my_collection.map(to_uint8);

Cropping YUV_420_888 images for Firebase barcode decoding

I'm using the Firebase-ML barcode decoder in a streaming (live) fashion using the Camera2 API. The way I do it is to set up an ImageReader that periodically gives me Images. The full image is the resolution of my camera, so it's big - it's a 12MP camera.
The barcode scanner takes about 0.41 seconds to process an image on a Samsung S7 Edge, so I set up the ImageReaderListener to decode one at a time and throw away any subsequent frames until the decoder is complete.
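Roughly, that throttling looks like the following sketch, where decodeBarcode is a hypothetical stand-in for handing the frame to the detector asynchronously:
import android.media.Image;
import android.media.ImageReader;
import java.util.concurrent.atomic.AtomicBoolean;

class ThrottledListener implements ImageReader.OnImageAvailableListener {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage(); // drops older queued frames
        if (image == null) return;
        if (!busy.compareAndSet(false, true)) {
            image.close(); // decoder still running: throw this frame away
            return;
        }
        decodeBarcode(image, () -> {
            image.close();   // return the buffer to the ImageReader
            busy.set(false); // accept the next frame
        });
    }

    private void decodeBarcode(Image image, Runnable onDone) {
        // Hypothetical: run the barcode detector asynchronously and
        // invoke onDone from its success/failure callback.
        onDone.run();
    }
}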
The image format I'm using is YUV_420_888 because that's what the documentation recommends, and because if you try to feed the ML Barcode decoder anything else it complains (run time message to debug log).
All this is working but I think if I could crop the image it would work better. I'd like to leave the camera resolution the same (so that I can display a wide SurfaceView to help the user align his camera to the barcode) but I want to give Firebase a cropped version (basically a center rectangle). By "work better" I mean mostly "faster" but I'd also like to eliminate distractions (especially other barcodes that might be on the edge of the image).
This got me trying to figure out the best way to crop a YUV image, and I was surprised to find very little help. Most of the examples I have found online use a multi-step process where you first convert the YUV image into a JPEG, then render the JPEG into a Bitmap, then scale that. This has a couple of problems in my mind:
It seems like that would have significant performance implications (in real time). Cropping, by contrast, would help me accomplish a few things, including reducing power consumption, improving the response time, and letting me return Images to the ImageReader via image.close() more quickly.
This approach doesn't get you back to an Image, so you have to feed Firebase a Bitmap instead, and that doesn't seem to work as well. I don't know what Firebase is doing internally, but I kind of suspect it's working mostly (maybe entirely) off of the Y plane, and that the translation of Image -> JPEG -> Bitmap muddies that up.
I've looked around for YUV libraries that might help. There is something in the wild called libyuv-android but it doesn't work exactly in the format firebase-ml wants, and it's a bunch of JNI which gives me cross-platform concerns.
I'm wondering if anybody else has thought about this and come up with a better solution for cropping YUV_420_888 images on Android. Am I unable to find this because it's a relatively trivial operation? There's stride and padding to be concerned with, among other things. I'm not an image/color expert, and I kind of feel like I shouldn't attempt this myself; my particular concern is that I'll figure out something that works on my device but not on others.
Update: this may actually be kind of moot. As an experiment I looked at the Image that comes back from ImageReader. It's an instance of ImageReader.SurfaceImage which is a private (to ImageReader) class. It also has a bunch of native tie-ins. So it's possible that the only choice is to do the compress/decompress method, which seems lame. The only other thing I can think of is to make the decision myself to only use the Y plane and make a bitmap from that, and see if Firebase-ML is OK with that. That approach still seems risky to me.
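For reference, the compress/decompress route would look roughly like this; it's only a sketch, and it assumes the YUV_420_888 planes have already been repacked into a single NV21 byte array (which itself requires dealing with row and pixel strides), with cropRect being the center rectangle I want to keep:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

class CropViaJpeg {
    static Bitmap crop(byte[] nv21Bytes, int width, int height, Rect cropRect) {
        // null strides: assumes the NV21 buffer has no row padding.
        YuvImage yuv = new YuvImage(nv21Bytes, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // compressToJpeg only encodes the pixels inside cropRect.
        yuv.compressToJpeg(cropRect, 90, out);
        byte[] jpeg = out.toByteArray();
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }
}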
I tried to scale down the YUV_420_888 output image today. I think it is quite similar to cropping the image.
In my case, I take the 3 byte arrays from the image planes, representing Y, U and V:
yBytes = image.getPlanes()[0].getBuffer()  // Y plane
uBytes = image.getPlanes()[1].getBuffer()  // U plane
vBytes = image.getPlanes()[2].getBuffer()  // V plane
Then I convert them to an RGB array for the bitmap conversion. I found that if I read the YUV arrays at the original image width and height, but with a step of 2, I can scale my bitmap down to half size.
What you should prepare:
yBytes,
uBytes,
vBytes,
width of Image,
height of Image,
y row stride,
uv RowStride,
uv pixelStride,
output array (its length should equal the output image width * height; in my case the output width and height are each half of the original, so the array is a quarter of the full size)
This means that if you know the crop region (the four corners of the rectangle) in your image, you can just fill the new RGB array from that part of the YUV data.
Hope it can help you to solve the problem.
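A minimal Java sketch of the idea, assuming the plane ByteBuffers have already been copied into yBytes/uBytes/vBytes; the crop corners and the step (2 gives a half-size output) are parameters you choose:
class YuvRegionToRgb {
    // Assumes (right - left) and (bottom - top) are multiples of step.
    static int[] toArgb(byte[] yBytes, byte[] uBytes, byte[] vBytes,
                        int yRowStride, int uvRowStride, int uvPixelStride,
                        int left, int top, int right, int bottom, int step) {
        int outWidth = (right - left) / step;
        int outHeight = (bottom - top) / step;
        int[] argb = new int[outWidth * outHeight];
        int i = 0;
        for (int y = top; y < bottom; y += step) {
            for (int x = left; x < right; x += step) {
                int yVal = yBytes[y * yRowStride + x] & 0xFF;
                // U and V are subsampled 2x2; pixelStride handles interleaved planes.
                int uvIndex = (y / 2) * uvRowStride + (x / 2) * uvPixelStride;
                int u = (uBytes[uvIndex] & 0xFF) - 128;
                int v = (vBytes[uvIndex] & 0xFF) - 128;
                // Standard BT.601-style YUV -> RGB conversion.
                int r = clamp(Math.round(yVal + 1.370705f * v));
                int g = clamp(Math.round(yVal - 0.337633f * u - 0.698001f * v));
                int b = clamp(Math.round(yVal + 1.732446f * u));
                argb[i++] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    private static int clamp(int v) {
        return v < 0 ? 0 : Math.min(v, 255);
    }
}
The resulting array can be turned into a bitmap with Bitmap.createBitmap(argb, outWidth, outHeight, Bitmap.Config.ARGB_8888).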

WKInterfaceDevice caching optimization

I am trying to cache rendered animations on the Apple Watch (these are generated at run time). I have saved the frames of each animation as JPEG @1x with a compression quality of 0.1. The sum of all the frames is less than 1.2 MB. I clear the cache before I start caching. However, only about half the animations are cached. The documentation says that the cache is 5 MB. What am I doing wrong?
If you want to send image data to the Watch programmatically (i.e. not at compile time), WKInterfaceDevice provides two methods:
addCachedImage:name: accepts a UIImage, encodes it as PNG image data, and transmits it to the cache. So, if you create a UIImage from JPEG data, you are actually decoding the JPEG data into an image, then re-encoding it as PNG before it's sent to the cache (thereby negating the effects of JPEG-encoding in the first place).
addCachedImageWithData:name: accepts NSData and transmits the unaltered data directly to the cache. So, if you encode your image to NSData using UIImageJPEGRepresentation and pass it to this method, you'll transmit and store less in the cache. I use this technique for all of my images, unless I need the benefits of a PNG image; in that case, I encode my own NSData using UIImagePNGRepresentation and send it using this method.
For debugging purposes, it's helpful to use the [[WKInterfaceDevice currentDevice] cachedImages] dictionary to find the size of the cached image data. Each entry's value is an NSNumber with the size (in bytes) of that cache entry.
I just discovered that if you use this line of code:
[self.image setImageNamed:@"number"];
Your images should be named:
number1.png
number2.png
number3.png
number4.png
I was running into a similar error when I had my images named:
number001.png
number002.png
number003.png
number004.png

IMediaSample(DirectShow) to IDirect3DSurface9/IMFSample(MediaFoundation)

I am working on a custom video player. I am using a mix of DirectShow and Media Foundation in my architecture. Basically, I'm using DirectShow to grab VOB frames (unsupported by MF). I am able to get a sample from DirectShow, but I am stuck on passing it to the renderer. In MF, I get an IDirect3DSurface9 (from an IMFSample) and present it on the back buffer using the IDirect3DDevice9.
Using DirectShow, I'm getting an IMediaSample as my data buffer object. I don't know how to convert it and pass it on as an IMFSample. I found others getting bitmap info from the sample and using GDI+ to render, but my video data may not always be RGB. I wish to get an IDirect3DSurface9, or maybe an IMFSample, from the IMediaSample and pass it on for rendering, so that I don't have to bother with color space conversion.
I'm new to this. Please correct me if I'm going wrong.
Thanks
The IMediaSample you get from the upstream decoder in DirectShow is nothing but a wrapper over a memory-backed buffer. There is no, and cannot be any, D3D surface behind it (unless you take care of that yourself and provide a custom allocator, in which case you would not have this question in the first place). Hence, you have to memory-copy the data from this buffer into an MF sample buffer.
That brings you to the question of making the buffer formats (media types) match, so that you can copy without conversion. One way (there may well be a few) is to first establish the MF pipeline and find out exactly which pixel format the video hardware offers for its buffers. Then make sure you have that pixel format and media type in the DirectShow pipeline, by using the corresponding grabber initialization or color space conversion filters, or via a color space conversion DMO/MFT.

Flex - Save image with higher dpi

In Flex, I am using graphics.codec.JPEGEncoder to save image files that are edited inside the application (normal manipulations like brightness, etc.). I am able to save files perfectly. What I want to know is: is there any way I can save the image with a higher dpi? Say, for instance, the image that was loaded and manipulated was originally 72 dpi; can I now save it at 150 or 300 dpi? If so, how do I do it?
It doesn't have to be JPEGEncoder; if there's any way to do it at all, such as with another library, I am okay with it. Any suggestions?
Note: if it matters, I am using BitmapData to store the image and manipulations, and I save the image with JPEGEncoder by supplying its data as a ByteArray, like below.
var imageBytes:ByteArray = encoder.encode(myBitmapData);
If you want to save a 72 dpi image as a 150 or 300 dpi image, then in your case it is essentially an enlargement, and you have to use something like bicubic interpolation for it.
