Recover data from a QR code with the last 2 lines missing - qr-code

Bitcoin.
I have 250 BTC on a QR code that, I discovered only now, has the last 2 lines missing.
If my math is correct, 2 lines (the width is around 25 pixels, so 2 lines = 50 boxes that can only be black or white) gives
2^50 ≈ 10^15 combinations.
The QR code produces a 30-character hash, and I have the first 13 characters of it.
Is there any way you would suggest I try to recover the money?

Some of the last two lines are part of the finder pattern at the bottom left, which carries no information (you can easily draw it back in). It is surrounded by a white gutter of 1 module, and the next column over (moving right) is part of the format information section. That section is error-correctable itself, but is also replicated at the top right, so you won't need this bit.
The rest is indeed information in a version 2 code. You're missing only 16*2 = 32 bits, or 4 codewords. The minimal error correction level for a QR code, level L, has 10 EC codewords, which can correct up to 5 unknown codeword errors (or up to 10 erasures at known positions). Just leave the area white: those 4 codewords will all be wrong, but that is easy to correct, with room to spare, by any decoder.
Just draw the finder pattern back in.
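As an illustration only (not part of the original answer), here is a rough Python sketch using Pillow and pyzbar; the file name, the module size, and the assumption that the symbol starts at the image's top-left pixel with no quiet-zone margin are all placeholders you would have to adapt to the actual scan:

from PIL import Image, ImageDraw
from pyzbar.pyzbar import decode

MODULE = 10   # assumed pixel size of one module in the scan (placeholder)
SIZE = 25     # a version 2 QR code is 25x25 modules

# Assumes the symbol starts at the image's top-left pixel (no quiet-zone margin).
img = Image.open('qr.png').convert('L')
draw = ImageDraw.Draw(img)

def paint(col, row, black):
    # Paint one module, counting modules from the top-left corner of the symbol.
    x0, y0 = col * MODULE, row * MODULE
    draw.rectangle([x0, y0, x0 + MODULE - 1, y0 + MODULE - 1], fill=0 if black else 255)

# Redraw the 7x7 bottom-left finder pattern (rows 18-24, columns 0-6):
# black outer ring, white ring, black 3x3 centre.
for r in range(7):
    for c in range(7):
        ring = max(abs(r - 3), abs(c - 3))   # Chebyshev distance from the centre
        paint(c, SIZE - 7 + r, black=(ring != 2))

# Leave the rest of the two missing rows white and let error correction handle it.
for row in (SIZE - 2, SIZE - 1):
    for col in range(7, SIZE):
        paint(col, row, black=False)

print(decode(img))

The idea is exactly the one described above: repaint the bottom-left finder pattern, leave the rest of the two missing rows white, and let the Reed-Solomon error correction recover the 4 damaged codewords.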

Related

Dicom protocol and CT scan and sorting according to z position of axial slices?

I have a CT scan of the chest where I can't figure out how to sort the axial slices such that the first slices are the ones closest to the head.
The scan has the following dicom parameters:
Patient Position Attribute (0018,5100): FFS (Feet First Supine)
Image Position (Patient) (0020,0032): -174\-184\-15 (one slice)
Image Orientation (Patient)(0020,0037): 1\0\0\0\1\0
The most cranial slice (anatomically, closest to the head) has z position 13 and the most caudal (lower) one has -188.
However, when the Patient Position is FFS, shouldn't the slice with the lowest z position (e.g. -188) be the one located most cranially (anatomically, i.e. closest to the head)?
Can anyone enlighten me?
Kind regards
DICOM very clearly defines that the x-axis has to go from the patient's right to the left, the y-axis has to go from front to back, and the z-axis has to go from the feet to the head.
So the lower z position of -188 has to be closer to the feet than the higher position of 13. You should always rely on this.
The Patient Position attribute is rather an informational annotation. If you do all the math yourself, you can ignore it.
If a viewer does not do the math (there are a lot of them) and just loads the images and shows them sorted by ImageNumber, then the Patient Position attribute is the information that tells you whether the image with ImageNumber 1 is the one closer to the head or the one closer to the feet. Meaning: when the patient went through the CT scanner, which image was acquired first, the one of the head or the one of the feet.
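As an illustration of doing the math yourself (pydicom and the helper name are my own choices, not from the answer), you can sort the slices along the slice normal computed from Image Orientation (Patient) and reverse the order to get the most cranial slice first:

import numpy as np
import pydicom

def sort_head_to_feet(paths):
    """Sort axial slices so that the most cranial slice (largest z) comes first."""
    datasets = [pydicom.dcmread(p) for p in paths]
    # Slice normal = cross product of the row and column direction cosines
    # from Image Orientation (Patient); for 1\0\0\0\1\0 this is (0, 0, 1).
    iop = np.array(datasets[0].ImageOrientationPatient, dtype=float)
    normal = np.cross(iop[:3], iop[3:])
    # Project Image Position (Patient) onto the normal; in DICOM patient
    # coordinates z always increases toward the head, regardless of FFS/HFS.
    def along_normal(ds):
        return float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float)))
    return sorted(datasets, key=along_normal, reverse=True)

For a purely axial stack with orientation 1\0\0\0\1\0, this is the same as sorting by the z component of Image Position (Patient) in descending order.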

Handle "Division by Zero" in Image Processing (or PRNU estimation)

I have the following equation, which I am trying to implement. The upcoming question is not necessarily about this equation, but more generally about how to deal with divisions by zero in image processing:
Here, I is an image, W is the difference between the image and its denoised version (so W expresses the noise in the image), and K is an estimated fingerprint, obtained from d images of the same camera. All calculations are done pixel-wise, so the equation does not involve a matrix multiplication. For more on the idea of estimating digital fingerprints, consult the corresponding literature, such as the general Wikipedia article or scientific papers.
However, my problem arises when an image has a pixel with value zero, i.e. perfect black (let's say we only have one image, k=1, so the zero does not get compensated by the pixel value of the next image, in case that next pixel value is nonzero). Then I have a division by zero, which is not defined.
How can I overcome this problem? One option I came up with was adding +1 to all pixels right before I even start the calculations. However, this shifts the range of pixel values from [0|255] to [1|256], which then makes it impossible to work with the data type uint8.
Other authors in papers I have read on this topic often do not consider values close to the range borders. For example, they only evaluate the equation for pixel values in [5|250]. They justify this not with the numerical problem, but by saying that if an image is totally saturated or totally black, the fingerprint cannot be estimated properly in that area anyway.
But again, my main concern is not how this algorithm performs best, but rather, in general: how do you deal with divisions by zero in image processing?
One solution is to use subtraction instead of division; however, subtraction is not scale invariant, it is translation invariant.
[e.g. the ratio will always be a normalized value between 0 and 1, and if it exceeds 1 you can invert it; you can have the same normalization with subtraction, but you need to find the maximum values attained by the variables]
Eventually you will have to deal with division. Dividing a black image by itself is a proper subject; you can translate the values to some other range and then transform back.
However, 5/8 is not the same as 55/58, so you can take this only in a relative way. If you want to know the exact ratios, you had better stick with the original interval and handle the zeros as special cases, e.g. if denom == 0, do something with it; if num == 0 and denom == 0, then 0/0 means we have an identity, exactly as if we had 1/1.
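To make the "handle the zeros as special cases" idea concrete, here is a small NumPy sketch (my own illustration, not from the answer) that treats 0/0 as the identity 1 and leaves x/0 as NaN to be dealt with separately:

import numpy as np

def safe_ratio(num, den):
    """Element-wise num/den: 0/0 is treated as an identity (1), and x/0 is left
    as NaN so it can be handled separately later."""
    num = np.asarray(num, dtype=np.float64)
    den = np.asarray(den, dtype=np.float64)
    out = np.full(num.shape, np.nan)                # x/0 stays NaN for later handling
    np.divide(num, den, out=out, where=(den != 0))  # ordinary division where defined
    out[(num == 0) & (den == 0)] = 1.0              # 0/0 -> identity, as if 1/1
    return out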
In PRNU and fingerprint estimation, if you check the MATLAB implementation on Jessica Fridrich's webpage, they basically create a mask to get rid of saturated and low-intensity pixels, as you mentioned. Then they convert the image matrix with single(I), which makes the image 32-bit floating point, add 1 to the image, and divide.
To your general question: in image processing, I like to create a mask and add one only to zero-valued pixels.
img = imread('my gray img');          % grayscale image, uint8
a_mat = rand(size(img));              % example numerator (double)
mask = uint8(img == 0);               % 1 where the pixel value is zero
div = a_mat ./ double(img + mask);    % element-wise division; zero pixels bumped to 1
This will prevent the division-by-zero error. (Not tested, but it should work.)

Trying to make total color of image, not good at programming

I'm working on a spectroscopy project, though it's taking way too long. I haven't found any program online that can do what I need, and I don't know how to do it myself. What I need is a program that takes an image (from the hard drive) and adds up the RGB values (in decimal, out of 255) of every pixel, returned individually for red, green, and blue. Additionally, although I can do this part on my own, the values then need to be multiplied by 255 divided by the greatest value and converted to hex, so as to retrieve a kind of total color of the entire image. (NOTE: I do not want an average color of the image; I tried that and it only returns neutral colors.)
By googling for 2 minutes I found this:
https://itg.beckman.illinois.edu/technology_development/software_development/get_rgb/
No idea if it works but it looks like it does what you want.
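For what it's worth, the computation described in the question is only a few lines in Python with Pillow; this is a rough sketch with a placeholder file name, not the tool behind that link:

from PIL import Image

# Sum each channel over all pixels, scale the sums so the largest becomes 255,
# and print the result as one hex color. 'input.png' is a placeholder file name.
img = Image.open('input.png').convert('RGB')
pixels = list(img.getdata())

r_sum = sum(p[0] for p in pixels)
g_sum = sum(p[1] for p in pixels)
b_sum = sum(p[2] for p in pixels)
print('channel sums:', r_sum, g_sum, b_sum)

scale = 255.0 / max(r_sum, g_sum, b_sum)
r, g, b = (int(round(s * scale)) for s in (r_sum, g_sum, b_sum))
print('total color: #{:02X}{:02X}{:02X}'.format(r, g, b))

Note that a completely black image would make the scale factor a division by zero, so you may want to guard against a zero maximum.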

what kind of qr-like code is divided into 4 quarters

I was wondering what kind of QR-like code does not sport squares in the corners and is divided into 4 quarters by a solid black line? I would like to replicate this, since I think they look more professional than the variety I have seen before, but I cannot find out what kind of code it is.
It's a Data Matrix 2D barcode.
http://en.wikipedia.org/wiki/Data_Matrix
http://en.wikipedia.org/wiki/Barcode
It only looks divided into 4 blocks when you get above a certain size (such as 20x20 shown below)
http://jpgraph.net/download/manuals/chunkhtml/images/datamatrix-structure-details.png
This article talks more about these blocks (or more officially, 'Data Regions' or 'sub matrices'). See page 12 for a table of common sizes and data region breakdowns:
http://www.gs1.org/docs/barcodes/GS1_DataMatrix_Introduction_and_technical_overview.pdf
The matrix symbol (square or rectangle) will be composed of several areas of data (or: Data Regions), which together encode the data.
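If you just want to generate such a code yourself, a minimal sketch using the pylibdmtx bindings for libdmtx (one option among many; the payload and file name are placeholders) looks like this:

from PIL import Image
from pylibdmtx.pylibdmtx import encode

# Encode a string as a Data Matrix symbol and save it as a PNG.
encoded = encode('hello world'.encode('utf8'))
img = Image.frombytes('RGB', (encoded.width, encoded.height), encoded.pixels)
img.save('datamatrix.png')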

OCR and character similarity

I am currently working on some kind of OCR (Optical Character Recognition) system. I have already written a script to extract each character from the text and clean (most of the) irregularities out of it. I also know the font. The images I have now for example are:
M (http://i.imgur.com/oRfSOsJ.png (font) and http://i.imgur.com/UDEJZyV.png (scanned))
K (http://i.imgur.com/PluXtDz.png (font) and http://i.imgur.com/TRuDXSx.png (scanned))
C (http://i.imgur.com/wggsX6M.png (font) and http://i.imgur.com/GF9vClh.png (scanned))
For all of these images I already have a sort of binary matrix (1 for black, 0 for white). I was now wondering if there was some kind of mathematical projection-like formula to see the similarity between these matrices. I do not want to rely on a library, because that was not the task given to me.
I know this question may seem a bit vague and there are similar questions, but I'm looking for the method, not for a package, and so far I couldn't find any comments regarding the method. The reason this question is vague is that I really have no starting point. What I want to do is actually described here on Wikipedia:
Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition".[9] This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly. (http://en.wikipedia.org/wiki/Optical_character_recognition#Character_recognition)
If anyone could help me out on this one, I would appreciate it very much.
For recognition or classification, most OCRs use neural networks.
These must be properly configured for the desired task: the number of layers, the internal interconnection architecture, and so on. Another problem with neural networks is that they must be properly trained, which is pretty hard to do well, because you need to know things like the proper training dataset size (so it contains enough information and you do not over-train it). If you do not have experience with neural networks, do not go this way if you need to implement it yourself!
There are also other ways to compare patterns:
vector approach
polygonize image (edges or border)
compare polygon similarity (surface area, perimeter, shape, ...)
pixel approach
You can compare images based on:
histogram
DFT/DCT spectral analysis
size
number of occupied pixels in each line
start position of an occupied pixel in each line (from the left)
end position of an occupied pixel in each line (from the right)
these 3 parameters can also be computed for rows
points-of-interest list (points where there is some change, like an intensity bump, edge, ...)
You create a feature list for each tested character and compare it to your font; the closest match is your character. These feature lists can also be scaled to some fixed size (like 64x64) so the recognition becomes invariant to scaling.
Here is a sample of the features I use for OCR.
In this case (the feature size is scaled to fit in NxN), each character has 6 arrays of N numbers, like:
int row_pixels[N]; // 1st image
int lin_pixels[N]; // 2nd image
int row_y0[N];     // 3rd image, green
int row_y1[N];     // 3rd image, red
int lin_x0[N];     // 4th image, green
int lin_x1[N];     // 4th image, red
Now pre-compute all the features for each character in your font and for each read character, and find the closest match from the font:
min distance between all feature vectors/arrays
not exceeding some threshold difference
This is partially invariant to rotation and skew, up to a point. I do OCR for filled characters, so for an outlined font it may need some tweaking.
[Notes]
For comparison you can use a distance or a correlation coefficient.
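As a rough illustration of the per-line features described above (the NumPy implementation, the 64x64 normalization size, and the font_glyphs dictionary mapping characters to binary matrices are my own choices, not part of the answer), the extraction and matching could look like this:

import numpy as np

N = 64  # fixed feature size, as suggested above

def features(glyph):
    """Per-row/per-column features of a binary glyph matrix (1 = black, 0 = white)."""
    g = np.asarray(glyph, dtype=float)
    # Resample the glyph to N rows and N columns so the features are scale invariant.
    rows = np.linspace(0, g.shape[0] - 1, N).astype(int)
    cols = np.linspace(0, g.shape[1] - 1, N).astype(int)
    g = g[np.ix_(rows, cols)]
    row_pixels = g.sum(axis=1)                             # occupied pixels per row
    col_pixels = g.sum(axis=0)                             # occupied pixels per column
    row_first = np.argmax(g > 0, axis=1)                   # first occupied column per row
    row_last = N - 1 - np.argmax(g[:, ::-1] > 0, axis=1)   # last occupied column per row
    return np.concatenate([row_pixels, col_pixels, row_first, row_last])

def classify(glyph, font_glyphs):
    """Return the font character whose feature vector is closest (minimum distance)."""
    f = features(glyph)
    return min(font_glyphs, key=lambda ch: np.linalg.norm(f - features(font_glyphs[ch])))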
