QR codes - same URL - different image - why?

Why do some QR codes look different when using the same URL?

There are 40 versions (sizes) of QR codes, 4 error correction levels and 8 masking patterns, giving a total of 40 × 4 × 8 = 1280 possible QR codes for any given input.
Typically the version is chosen based on the amount of data to be stored, and the mask is chosen to produce the best image in terms of readability. The error correction level is chosen by the encoder based on how much data might need to be recovered.

Choosing a different error correction level will result in a different image. The higher the level, the better the chances it can recover from unreadable data.
http://en.wikipedia.org/wiki/QR_code#Storage
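For illustration, here is a minimal sketch using the CRAN qrcode package (an assumption; any encoder that lets you set the error correction level would show the same effect). The same URL encoded at levels L and H produces two different symbols:

library(qrcode)  # assumed encoder; qr_code() returns the module matrix

low  <- qr_code("http://example.com", ecl = "L")  # lowest error correction
high <- qr_code("http://example.com", ecl = "H")  # highest error correction

identical(low, high)  # FALSE: the modules differ
dim(low); dim(high)   # the higher level may even need a larger version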

Related

How do I convert my image data to a format similar to the Fashion-MNIST data?

I'm new to machine learning, so please bear with my novice question. I'm trying to train a model to recognize benthic foraminifera based on their detailed taxonomy... here is a sample of what foraminifera look like.
I've been successful in doing this simply by loading my data using flow_images_from_directory(). However, I don't know how to explore the structure of the object that flow_images_from_directory generates. I would like to format my data-set similar to the structure of the Fashion MNIST data, so that it is easy to use a modification of the code below. I have some experience with the magick package.
library(keras)  # provides dataset_fashion_mnist() and the %<-% operator
fashion_mnist <- dataset_fashion_mnist()  # assign the result so it can be unpacked below
c(train_images, train_labels) %<-% fashion_mnist$train
c(test_images, test_labels) %<-% fashion_mnist$test
so that I have something like that set, which would make it easier for me to understand, especially the labeling part. Also, if possible, I want to be able to append other information from a CSV file to the data-set. My data is already arranged in folders and sub-folders as follows:
data/
train/
ammonia/ ### 102 pictures
ammonia001.tif
ammonia002.tif
...
elphidium/ ### 1024 pictures
elphidium001.jpg
elphidium002.jpg
...
test/
ammonia/ ### 16 pictures
ammonia001.jpg
ammonia002.jpg
...
elphidium/ ### 6 pictures
elphidium001.jpg
elphidium002.jpg
...
Any help or guide to materials will be highly appreciated.
I'll describe the steps you would go through at a high level; a code sketch follows the list.
Assuming you now have a training and a testing set, both with all your classes reasonably balanced:
load your images and extract the pixel values, then normalize the values so they are between 0 and 1
if the images are of different sizes, pad them so they are all the same size
if you are not using a method that requires 2D structure, such as a CNN, also flatten the pixel values
associate your images (in pixel form) with your class labels
Now you have a set of images of fixed size in pixel form with their associated class labels. You can then feed this into whatever model you are using.
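Here is a minimal sketch of those steps in R with magick, assuming 28x28 greyscale targets as in Fashion MNIST and the data/train layout from your question; load_folder is a hypothetical helper, not part of magick:

library(magick)

load_folder <- function(dir) {
  files <- list.files(dir, pattern = "\\.(tif|jpg)$",
                      recursive = TRUE, full.names = TRUE)
  labels <- basename(dirname(files))  # class label = name of the sub-folder
  pixels <- vapply(files, function(f) {
    img <- image_read(f)
    img <- image_scale(img, "28x28!")              # "!" forces an exact size (resizing rather than padding, for simplicity)
    img <- image_convert(img, type = "grayscale")  # one channel, like Fashion MNIST
    as.integer(image_data(img, channels = "gray")) / 255  # normalize to [0, 1]
  }, numeric(28 * 28))
  list(images = t(pixels), labels = factor(labels))  # one flattened row per image
}

train <- load_folder("data/train")
test  <- load_folder("data/test")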
Hope this helps; let me know if you're confused by any part.
Side note: from your sample, it looks like your dataset is heavily skewed - lots of elphidium examples but not a lot of ammonia examples. This will probably lead to problems later on. In general, you want a balanced number of examples between your classes.

SOM Data preparation

Good day.
I am three months into R and RStudio but am getting the hang of things. I am implementing a SOM solution for 38k records/observations using the Kohonen SuperSOM, following Self-Organising Maps for Customer Segmentation using R.
My data has no missing values, but almost 60 columns, many of them dummy variables (I received the data in this format).
I have removed the one character column (URL).
My Y column (as I understand it) is "shares" (how many times the article was shared).
My data consists only of numerical data (the dummy variables are of course 1 or 0).
I have centered and scaled my data (the entire data frame).
As per the example I followed, I did convert the entire data frame to a matrix.
My problem is that my SOM takes ages to train, even with multi-core processing, and my progress graph does not reach a nice flat-ish plateau: it does come down nicely, but it is still very erratic. All my other graphs are extremely densely populated and show no nice clustering. I have even tried 500 iterations with a 100x100 grid ;-(
I think/guess it is because of the huge number of columns, most of them dummy variables, e.g. dayOfWeek.Monday, dayOfWeek.Tuesday, category.LifeStile, category.Computers, etc.
What am I to do?
Should I convert the dummy variables back into another format? How, and why?
Please do not just give me a section of code, as I would like to understand why I need to do what.
Thanx

What is the meaning of RescaleType = 'LOG_E REL' in a DICOM file?

I am trying to figure out the meaning of RescaleType = 'LOG_E REL' in a DICOM file. To be more specific, I need to know how to process the raw pixel values to get the image displayed in a proper way. Up to now I have only seen files with RescaleType = 'P-VALUES', which seemed to be correctly processed when applying the formula:
pixVal = rescaleIntercept + rescaleSlope * pixRaw.
What would be the rescaling formula to apply when RescaleType = 'LOG_E REL'?
I am not sure if this value is part of the DICOM standard or if it is just a manufacturer-specific value.
I say that because I have only seen these values in images generated by an old (currently out of service) Agfa ADC Compact CR.
In the documentation you can read this:
LOG_E REL: pixel values are linearly related to the Log Exposure on the image plate; the maximum pixel value corresponds to a delta LogE of 3.2767 above the LogE for the minimum pixel value; in this case, a VOI module (sequenced item) is present, also containing a lookup table. Only 12 bit is supported.
I do not know if you should apply a rescaling formula or if this is just a note that some kind of postprocessing algorithm has been applied to the original image.
I assume you should just apply the given VOI LUT instead of trying to apply a rescale equation.
It would help if you could share an example of such a dataset. In any case, I believe this is a Type 3 tag, in which case the information is not really required. Just apply the Rescale Slope/Intercept as usual and see what it does. A sketch of both steps follows.
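A minimal sketch in R, assuming the pixel data has already been read into a numeric matrix; pixRaw, voiLUT and firstMapped are hypothetical names for illustration, not DICOM attribute accessors:

apply_rescale <- function(pixRaw, rescaleSlope, rescaleIntercept) {
  rescaleIntercept + rescaleSlope * pixRaw  # the usual rescale step from the question
}

apply_voi_lut <- function(pixVal, voiLUT, firstMapped) {
  # clamp each value into the table's input range, then look it up
  idx <- pmin(pmax(round(pixVal) - firstMapped + 1, 1), length(voiLUT))
  matrix(voiLUT[idx], nrow = nrow(pixVal))
}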

How to make two QR codes render at the same size

There are two QR codes on the page that reference several pieces of dynamic input data; the data contains numeric, alphanumeric and Chinese (UTF-8) characters. The two QR codes use the same module width and error correction level (M). For example, the data might be:
QR1 = 0000|ABC|def|中文|
QR2 = aaa#bbb.com| |XYZ
Is there any way to make QR1 and QR2 render at almost the same size?
I tried to make the data of QR1 and QR2 the same length by appending spaces, but it did not work :(
thanks
I made those QR Codes at www.unitaglive.com/qrcode.
They have the same number of columns (that is called the version: http://www.qrcode.com/en/about/version.html).
What would you like to do exactly?
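If the goal is simply that both symbols come out the same size, one approach is to pad the shorter payload until both need the same version. A minimal sketch using the CRAN qrcode package (an assumption; any encoder reporting the symbol size would do, and the ASCII payloads are used for illustration since Kanji/UTF-8 support varies by encoder):

library(qrcode)

pad_to_same_version <- function(a, b, ecl = "M") {
  size <- function(x) nrow(qr_code(x, ecl = ecl))  # modules per side reflect the version
  while (size(a) < size(b)) a <- paste0(a, " ")    # grow the smaller payload
  while (size(b) < size(a)) b <- paste0(b, " ")
  list(a = a, b = b)
}

padded <- pad_to_same_version("0000|ABC|def|", "aaa#bbb.com| |XYZ")

Note that appending spaces can change the encoding mode the encoder picks, so equal string lengths do not guarantee equal symbol sizes; comparing the rendered sizes, as above, sidesteps that.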

Calculating the number of error correction words needed in a QR code

I would like to encode QR codes. Therefore I need to know how many error correction words are needed for a specified version and correction level.
For QR version 1 in combination with EC level Q there must be 13 error correction words and 13 data words.
I know there are some tables (tables 7, 8 and 9) in ISO/IEC 18004 where this information is stored, but I would like to know if it is possible to calculate the number of needed error correction words.
Greets,
Raffi
Yes, you need ISO 18004; the error correction counts are defined by table rather than by a formula you could derive them from. I suppose you could also look at the source code from zxing that works with these counts. It happens around the method interleaveWithECBytes.
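So the counts themselves have to come from the table. A minimal sketch of such a lookup in R, shown for version 1 only (the values below are the standard version 1 counts; extend the table for versions 2 to 40):

ec_codewords <- data.frame(
  version = 1,
  level   = c("L", "M", "Q", "H"),
  data    = c(19, 16, 13, 9),  # data codewords
  ec      = c(7, 10, 13, 17)   # error correction codewords
)

subset(ec_codewords, version == 1 & level == "Q")  # 13 data + 13 EC, as in the question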
