Dimensions of a QR Code printed on a floor decal? - qr-code

I am looking to put a QR Code on a floor decal in a grocery store. We are going to encode the URL to a promotion micro-site.
What should the dimensions of the QR Code be for it to be read correctly?

I don't see why there need to be any explicit dimensions; the required elements of a QR code are used to determine size, alignment, etc. As long as you're printing an actually valid QR code (quiet-zone border included, etc.), the customer should be able to adjust the distance from the camera to the code.
If you're really so inclined, just hold an average smartphone at average-person height, and roughly determine the size from the field of view.
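To put rough numbers on that, here is a back-of-the-envelope Python sketch; the field of view, the phone-to-floor distance, and the distance-over-ten rule of thumb are illustrative assumptions, not measured values.
import math
# Rough size check for a floor decal: how wide a strip of floor the phone
# sees at a given distance, plus a commonly cited minimum-size rule of thumb.
distance_m = 1.3                       # phone-to-decal distance (assumed)
horizontal_fov_deg = 65.0              # typical smartphone camera FOV (assumed)
visible_width_m = 2.0 * distance_m * math.tan(math.radians(horizontal_fov_deg) / 2.0)
suggested_side_m = distance_m / 10.0   # rule of thumb: code side ~ scanning distance / 10
print(f"frame covers ~{visible_width_m:.2f} m of floor; "
      f"suggested QR side ~{suggested_side_m * 100:.0f} cm")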

Related

Proximity Distortion in Depth Image

Description:
The goal of my current project is to determine the location of an "object" with just its 3D-coordinates.
To achieve that I figured it'd be best to turn off the "Fill"-Mode of my Camera (ZED 2 from Stereolabs), because I want some hard edges in my depth-image.
The Problem:
The depth image is being distorted to a major degree due to proximity of other "objects".
The following image shows the depth image from the side; it is viewing some bars in front of a smooth wood wall. The wall is mostly plain, so everything is fine here.
I blacked out the color image and myself; don't worry about those parts.
When I put my hand or another object in front of the wood wall, parts that are bigger than my actual hand get "pulled" towards the camera around the location of the hand or other object. These parts seem to "stick" to other elevated parts in the proximity; the area between the bars and my arm gets pulled entirely.
Question(s):
Is this normal?
Is there an easy way to get rid of it?
What is the reason behind it?
My own assumption(s):
It feels like this is some sort of approximation of unknown parts. Hopefully that's all it is; I'm glad the camera was calibrated by default, as that is usually a pain to do right.
Due to the new object placed in front of the wall, more of the scene is hidden, and therefore there are more areas that the camera cannot see with both lenses. Maybe it just "guesses" that the area in between is not so far off, due to some underlying algorithm that makes the image smoother.
First of all, I would advise you to change the depth mode while keeping the sensing mode in STANDARD:
ULTRA: offers the highest depth range and better preserves Z-accuracy along the sensing range.
QUALITY: has a strong filtering stage giving smooth surfaces.
PERFORMANCE: designed to be smooth, can miss some details.
From your description, it seems like you are using the PERFORMANCE mode.
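If it helps, here is a minimal sketch of selecting ULTRA depth while keeping STANDARD sensing, using the ZED SDK's Python API (pyzed); the naming follows SDK 3.x, and the depth range value is an assumption for illustration.
import pyzed.sl as sl
# Open the ZED 2 with ULTRA depth mode and STANDARD sensing mode.
zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.ULTRA          # instead of PERFORMANCE
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_maximum_distance = 10.0             # keep the range tight (assumed value)
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("could not open the camera")
runtime = sl.RuntimeParameters()
runtime.sensing_mode = sl.SENSING_MODE.STANDARD       # no fill, hard edges preserved
depth = sl.Mat()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)     # occluded areas stay invalid
zed.close()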
The ZED camera uses a matching algorithm to generate the disparity/depth map, which is closed source. I recently contacted Stereolabs about it and they said: "We cannot disclose this information to you because it's internal information and proprietary to Stereolabs."
Other works on the ZED camera showed some limitations in depth sensing, especially when there is variation in lighting and shadows; see "Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs".
In addition to this, the depth error is directly proportional to the distance of the object from the camera, so make sure to set your depth range properly.

Cropping YUV_420_888 images for Firebase barcode decoding

I'm using the Firebase-ML barcode decoder in a streaming (live) fashion using the Camera2 API. The way I do it is to set up an ImageReader that periodically gives me Images. The full image is the resolution of my camera, so it's big - it's a 12MP camera.
The barcode scanner takes about 0.41 seconds to process an image on a Samsung S7 Edge, so I set up the ImageReaderListener to decode one at a time and throw away any subsequent frames until the decoder is complete.
The image format I'm using is YUV_420_888 because that's what the documentation recommends, and because if you try to feed the ML Barcode decoder anything else it complains (run time message to debug log).
All this is working but I think if I could crop the image it would work better. I'd like to leave the camera resolution the same (so that I can display a wide SurfaceView to help the user align his camera to the barcode) but I want to give Firebase a cropped version (basically a center rectangle). By "work better" I mean mostly "faster" but I'd also like to eliminate distractions (especially other barcodes that might be on the edge of the image).
This got me trying to figure out the best way to crop a YUV image, and I was surprised to find very little help. Most of the examples I have found online do a multi-step process where you first convert the YUV image into a JPEG, then render the JPEG into a Bitmap, then scale that. This has a couple of problems in my mind:
It seems like that would have significant performance implications (in real time). A cheap crop would help me accomplish a few things, including reducing power consumption, improving response time, and allowing me to return Images to the ImageReader via image.close() more quickly.
This approach doesn't get you back to an Image, so you have to feed Firebase a Bitmap instead, and that doesn't seem to work as well. I don't know what Firebase is doing internally, but I kind of suspect it's working mostly (maybe entirely) off of the Y plane and that the translation of Image -> JPEG -> Bitmap muddies that up.
I've looked around for YUV libraries that might help. There is something in the wild called libyuv-android but it doesn't work exactly in the format firebase-ml wants, and it's a bunch of JNI which gives me cross-platform concerns.
I'm wondering if anybody else has thought about this and come up with a better solution for cropping YUV_420_888 images in Android. Am I not able to find this because it's a relatively trivial operation? There's stride and padding to be concerned with, among other things. I'm not an image/color expert, and I kind of feel like I shouldn't attempt this myself, my particular concern being that I figure out something that works on my device but not others.
Update: this may actually be kind of moot. As an experiment I looked at the Image that comes back from ImageReader. It's an instance of ImageReader.SurfaceImage which is a private (to ImageReader) class. It also has a bunch of native tie-ins. So it's possible that the only choice is to do the compress/decompress method, which seems lame. The only other thing I can think of is to make the decision myself to only use the Y plane and make a bitmap from that, and see if Firebase-ML is OK with that. That approach still seems risky to me.
I tried to scale down the YUV_420_888 output image today. I think it is quite similar to cropping the image.
In my case, I take the three byte buffers from the image planes to represent Y, U, and V:
Image.Plane[] planes = image.getPlanes();
ByteBuffer yBytes = planes[0].getBuffer();  // Y
ByteBuffer uBytes = planes[1].getBuffer();  // U
ByteBuffer vBytes = planes[2].getBuffer();  // V
Then I convert them to an RGB array for bitmap conversion. I found that if I read the YUV data with the original image width and height in steps of 2, I can produce a bitmap at half the scale.
What should you prepare:
yBytes,
uBytes,
vBytes,
width of Image,
height of Image,
y row stride,
uv RowStride,
uv pixelStride,
output array (the length of the output array should equal the output image width * height; for me it is a quarter of the original size, i.e. half the width and half the height)
This means that if you can find the cropping area positions (the four corners of the region) in your image, you can fill the new RGB array using only that part of the YUV data.
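To make the index math concrete, here is a small NumPy sketch of cropping the three planes while respecting the row and pixel strides; the buffer names and crop rectangle are assumptions for illustration, and on Android you would do the same arithmetic against the plane ByteBuffers in Java or Kotlin.
import numpy as np
# Sketch of cropping YUV_420_888 planes with stride handling. y_buf, u_buf
# and v_buf are assumed to be flat uint8 arrays copied out of the plane
# buffers; left/top/crop_w/crop_h are assumed even so the crop stays on the
# 2x2 chroma subsampling grid.
def crop_yuv_420(y_buf, u_buf, v_buf,
                 y_row_stride, uv_row_stride, uv_pixel_stride,
                 left, top, crop_w, crop_h):
    y_out = np.empty((crop_h, crop_w), dtype=np.uint8)
    u_out = np.empty((crop_h // 2, crop_w // 2), dtype=np.uint8)
    v_out = np.empty((crop_h // 2, crop_w // 2), dtype=np.uint8)
    for row in range(crop_h):                      # luma: one byte per pixel
        start = (top + row) * y_row_stride + left
        y_out[row] = y_buf[start:start + crop_w]
    for row in range(crop_h // 2):                 # chroma: subsampled 2x2
        base = (top // 2 + row) * uv_row_stride
        for col in range(crop_w // 2):
            offset = base + (left // 2 + col) * uv_pixel_stride
            u_out[row, col] = u_buf[offset]
            v_out[row, col] = v_buf[offset]
    return y_out, u_out, v_out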
Hope it can help you to solve the problem.

How to estimate a QR code redundancy level?

I'm not a specialist, but as far as I know, a bit of information in a QR code is coded more than once, and this is described by the redundancy level.
How can I estimate a QR code's redundancy level? Is there a mobile app or a website where I can test my QR code's redundancy level easily? If not, is there an easy algorithm that I can implement?
Redundancy is sorted into different categories according to this website, but I'd like to have the direct percentage value if possible.
There are some pixels next to the lower left positioning block which indicate the redundancy level. Quote from https://blog.qrstuff.com/2011/12/14/qr-code-error-correction
Quite conveniently, there’s also 2 modules down in the bottom left-hand corner of every QR code that display what the error correction level used in that QR code is.
There is a very nice graphic on that page which visualizes this, which I won't include here as I assume that I'm not licensed to do so.
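Once you have read the level from those modules (or from a decoder library's metadata), the approximate recovery capacities defined by the QR specification (ISO/IEC 18004) give you the percentage directly; a trivial Python lookup:
# Approximate share of codewords that can be restored at each error
# correction level.
RECOVERY = {"L": 7, "M": 15, "Q": 25, "H": 30}
def redundancy_percent(level):
    # level is the letter read from the format information modules,
    # e.g. as reported by most decoder libraries.
    return RECOVERY[level.upper()]
print(redundancy_percent("Q"))  # 25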

ITK-SNAP segmentation displays same intensity value even after registration

I'm using ITK-SNAP to compare the intensities of several Regions of Interest between several conditions.
For some subjects, I need to realign one image to another by using the Registration tool.
However, I noticed that the intensity values of a specific segmentation that I drew on the reference image don't change no matter how I register.
The value will be different between the two images, but even if I manually register the second image to something completely off, it will stay the same.
Is it possible to get the actual mean intensity of my segmentation depending on where it is on the registered image?
Segmentation menu, option "Volumes and Statistics..." should show you what you are looking for.
Registration does not affect the intensities. Depending on how you transform your image, it changes the location and coordinates of your voxels, not their values: it may reshape, rotate, or translate the image. If you expect different intensities after registration, you need to apply some technique other than registration, because the transformation matrix is applied to coordinates and locations only. You should look at other properties of your data.
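If you want the mean under your label computed against the registered image outside of ITK-SNAP, one option is to resample the moving image into the reference space with the computed transform and then read the label statistics. A minimal SimpleITK sketch; the file names, the exported transform, and the label value are assumptions for illustration, not ITK-SNAP's own API:
import SimpleITK as sitk
# Apply a saved registration transform, resample the moving image into the
# reference grid, and read the mean intensity under label 1.
reference = sitk.ReadImage("reference.nii.gz")
moving = sitk.ReadImage("moving.nii.gz")
segmentation = sitk.ReadImage("segmentation.nii.gz")   # drawn on the reference
transform = sitk.ReadTransform("registration.tfm")     # exported registration result
# Resample so the moving image's voxels line up with the reference/segmentation.
resampled = sitk.Resample(moving, reference, transform, sitk.sitkLinear, 0.0)
stats = sitk.LabelStatisticsImageFilter()
stats.Execute(resampled, sitk.Cast(segmentation, sitk.sitkUInt8))
print("mean intensity in label 1:", stats.GetMean(1))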
There are some registration methods which do influence the intensities, but they are not used in ITK-SNAP, for example; you would need to look for a dedicated package.
For example, this paper:
"Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes"
which specifically manipulates the intensities for fusion:
https://www.sciencedirect.com/science/article/abs/pii/S1746809415001755
Another example is MATLAB's intensity-based automatic registration: the process begins with the transform type you specify and an internally determined transformation matrix; together, they determine the specific image transformation that is applied to the moving image with bilinear interpolation.

How to decide if a DICOM series is a 3D volume or a series of images?

We are writing an importer for dicom files.
How does one generally decide if a series of images forms a 3D volume or is just a series of 2D images?
Is there a universal way to decide this for most vendors? I looked at the DICOM tags and could not find an apparent solution.
The DICOM standard defines UIDs for describing the hierarchy. These are from top to bottom:
Study UID - Identifier of the study or scanning session.
Series UID - The same within a series acquired in one scan.
Image UID - Should be unique for any image.
A DICOM image saved by a standard-conforming implementation should have all these IDs. If multiple images have the same SeriesUID, they are a volume (or time-series) as defined in the standard. Some software of course is not standard-conforming and you'll have to look at other things like timestamps and patient position, but it is usually best to start by following the standard.
For ordering the series after identifying it, GDCM (as malat suggested) or DCMTK are pretty well-established libraries.
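As a starting point, here is a minimal pydicom sketch that groups files by SeriesInstanceUID; the directory path and file extension are assumptions.
from collections import defaultdict
from pathlib import Path
import pydicom
# Group DICOM files by SeriesInstanceUID; each group is a candidate
# volume or time-series.
series = defaultdict(list)
for path in Path("dicom_dir").glob("*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    series[ds.SeriesInstanceUID].append(path)
for uid, files in series.items():
    print(uid, len(files), "slices")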
In MR, you'll want to look for:
MR Acquisition Type (0018,0023). It has two enumerated values:
2D = frequency x phase
3D = frequency x phase x phase
I'm not as sure about CT.
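A quick pydicom check of that tag might look like this; the file name is an assumption.
import pydicom
# Check MR Acquisition Type (0018,0023); its value is "2D" or "3D" when present.
ds = pydicom.dcmread("slice.dcm", stop_before_pixels=True)
acq_type = ds.get("MRAcquisitionType")   # None if the tag is absent
print("volumetric acquisition" if acq_type == "3D" else "2D or unknown")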
Most of the time, malat's answer is what you'll want to do (i.e. organize the slices by position and orientation and treat them in a 3D fashion through multi-planar reconstruction).
I think what you are searching for is the algorithm to organize a DICOM dataset using Image Position (Patient) and Image Orientation (Patient).
A typical implementation can be found in GDCM
Please note that my answer may be totally unrelated to your specific DICOM instances, but since you did not specify which SOP Class UID you were dealing with, I simply assumed you were dealing with the old CT or MR Image Storage classes.
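The core of that algorithm is short: compute the slice normal as the cross product of the row and column direction cosines from Image Orientation (Patient), then sort slices by the projection of Image Position (Patient) onto that normal. A minimal pydicom/NumPy sketch, assuming the directory holds the files of a single series:
from pathlib import Path
import numpy as np
import pydicom
def slice_position(ds):
    # Row and column direction cosines (6 values) define the slice plane.
    orient = np.array(ds.ImageOrientationPatient, dtype=float)
    normal = np.cross(orient[:3], orient[3:])
    # Sort key: distance of the slice origin along the slice normal.
    return float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float)))
datasets = [pydicom.dcmread(p, stop_before_pixels=True)
            for p in Path("dicom_dir").glob("*.dcm")]
datasets.sort(key=slice_position)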
Patient Position (0018, 5100) is a type 1 required attribute for both the CT and MR modalities. This attribute is VERY IMPORTANT for accurately interpreting the patient's orientation.
A projection radiograph will typically have the Patient Orientation (0020,0020) attribute, while a cross-sectional image should have the Image Position (0020,0032) and Image Orientation (0020,0037) attributes, as they are type 1 required elements of the Image Plane module (see PS 3.3, section C.7.6.2.1.1).
However, a localizer or scout image included with a CT study is not really a cross-sectional image but a projection image, yet it may still contain Image Position and Image Orientation attributes. The same is true of MR studies, where one or more sagittal or coronal images are usually captured from which the axial images are prescribed. In these cases, different logic is needed to identify the localizer image; for example, a CT localizer may use the string "LOCALIZER" as value 3 of the Image Type attribute.
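A small pydicom snippet for that check (the file name is an assumption):
import pydicom
# Flag localizer/scout slices so they can be excluded from volume construction.
ds = pydicom.dcmread("slice.dcm", stop_before_pixels=True)
is_localizer = "LOCALIZER" in [str(v).upper() for v in ds.get("ImageType", [])]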
In case someone hasn't found the answer: I looked through the tags in the RadiAnt DICOM viewer, compared different files, and I think the Scan Options (0018,0022) tag contains the information. If the tag exists (on some files it was not there) and the value is equal to HELICAL MODE or HELIX, then a 3D image can be constructed from that.
