Cannot control the size of a printed QR code on an EPSON TM-T88V using MS POS for .NET

I am upgrading an existing retail application to print a QR code on an EPSON TM-T88V using MS POS for .NET 1.14.
The PrintBarCode function is straightforward to use for one-dimensional barcodes such as Code93, and their size can be adjusted with the width and height parameters.
public abstract void PrintBarCode(PrinterStation station, string data, BarCodeSymbology symbology, int height, int width, int alignment, BarCodeTextPosition textPosition);
However, with BarCodeSymbology QRCode (204), the size does not seem to be adjustable with these height and width parameters. The barcode prints fine, but it is very tiny (about 5 mm in width and height), regardless of the parameter values.
How can I adjust the size of the printed QR code?

The documentation for OPOS (EPSON OPOS ADK), rather than POS for .NET (EPSON OPOS ADK for .NET), has the following instructions.
The same notes probably apply to POS for .NET.
Please try adjusting the Width parameter value to get the desired size.
3.6.2 Printing Size
Because the width and height of a QR Code are the same, the code is printed at the largest size that fits within the value specified by the Width parameter.
Therefore, the printed height is not affected by the Height parameter.
If the Height parameter is less than 0, an error occurs.
The print size is determined by the QR version and the module size.
Because the QR version is determined by the data length and type, you can use the module size to adjust the print size. If the two-dimensional barcode cannot fit into the print area (depending on the paper width, layout settings, etc.), OPOS_E_ILLEGAL is returned and ResultCodeExtended becomes zero.
QR differs from the other two-dimensional barcodes in that the print width cannot be determined until the data has been encoded.
If the print width cannot be obtained, the page-mode range for 90-degree rotated printing cannot be specified.
Therefore, OPOS internally calculates the number of code words of the encoded data, which also allows the amount of data to be verified correctly.
And here is a similar Japanese FAQ:
How to print a QR Code <EPSON OPOS ADK series> (QR コードを印刷する方法<EPSON OPOS ADK シリーズ>)
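For reference, the "module size" the note refers to corresponds to a raw ESC/POS command on the TM-T88V. Below is a minimal C++ sketch at the ESC/POS level, bypassing POS for .NET entirely; the byte sequences are Epson's published GS ( k QR Code functions (167, 180, 181), and the FILE* printer handle is a placeholder for however you reach the printer:
#include <cstdio>
#include <cstring>

// Print a QR code via raw ESC/POS. moduleSize is 1..16 dots per module;
// a larger module prints a physically larger symbol.
void printQr(FILE *printer, const char *data, unsigned char moduleSize)
{
    // GS ( k, function 167: set the QR module size
    const unsigned char setSize[] = { 0x1D, 0x28, 0x6B, 0x03, 0x00,
                                      0x31, 0x43, moduleSize };
    fwrite(setSize, 1, sizeof setSize, printer);

    // GS ( k, function 180: store the data in the symbol storage area
    size_t k = strlen(data);
    const unsigned char store[] = { 0x1D, 0x28, 0x6B,
                                    (unsigned char)((k + 3) & 0xFF),
                                    (unsigned char)((k + 3) >> 8),
                                    0x31, 0x50, 0x30 };
    fwrite(store, 1, sizeof store, printer);
    fwrite(data, 1, k, printer);

    // GS ( k, function 181: print the stored symbol
    const unsigned char print[] = { 0x1D, 0x28, 0x6B, 0x03, 0x00,
                                    0x31, 0x51, 0x30 };
    fwrite(print, 1, sizeof print, printer);
}
If POS for .NET really ignores the width parameter for QRCode, a raw pass-through like this is one way to control the module size directly; otherwise, per the note above, the Width parameter should be the knob to turn.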

It depends on the amount of data you want to print.
For a short string like "www.microsoft.com" or "012345678", a small QR code will be printed.
When you increase the length of the data sent to the printer, you will see the difference.

Related

QPainter::drawImage prints different size than QImage::save and print from Photoshop

I'm scaling a QImage, currently like so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the latter case prints at the original size because the image content has already been drawn before the calls that set the dots per meter.
In contrast, when saving, the file appears to carry the dots-per-meter values you set on the image, and whatever later renders the file uses them.
I would expect that creating a second QImage, setting its dots per meter, and then copying the original into that second image would achieve the result you're looking for. Alternatively, you may be able to set the dots per meter on the original QImage before loading its content.
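A minimal sketch of the first suggestion (assuming whatever consumes the image honours its dots-per-meter metadata):
#include <QImage>
#include <QPainter>

// Returns a pixel-for-pixel copy of img whose dots-per-meter is multiplied
// by factor *before* any drawing happens, so the metadata is never stale.
QImage withScaledDpm(const QImage &img, int factor)
{
    QImage out(img.size(), img.format());
    out.setDotsPerMeterX(img.dotsPerMeterX() * factor);
    out.setDotsPerMeterY(img.dotsPerMeterY() * factor);
    QPainter p(&out);
    p.drawImage(0, 0, img);   // copy the pixels; only the DPM tags differ
    p.end();
    return out;
}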

OCR and character similarity

I am currently working on some kind of OCR (Optical Character Recognition) system. I have already written a script to extract each character from the text and clean (most of the) irregularities out of it. I also know the font. The images I have now for example are:
M (http://i.imgur.com/oRfSOsJ.png (font) and http://i.imgur.com/UDEJZyV.png (scanned))
K (http://i.imgur.com/PluXtDz.png (font) and http://i.imgur.com/TRuDXSx.png (scanned))
C (http://i.imgur.com/wggsX6M.png (font) and http://i.imgur.com/GF9vClh.png (scanned))
For all of these images I already have a sort of binary matrix (1 for black, 0 for white). I was now wondering if there was some kind of mathematical projection-like formula to see the similarity between these matrices. I do not want to rely on a library, because that was not the task given to me.
I know this question may seem a bit vague, and there are similar questions, but I'm looking for the method, not a package, and so far I couldn't find any comments about the method. The reason this question is vague is that I really have no starting point. What I want to do is described here on Wikipedia:
Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition".[9] This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly. (http://en.wikipedia.org/wiki/Optical_character_recognition#Character_recognition)
If anyone could help me out on this one, I would appreciate it very much.
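For reference, the matrix matching described in that quote reduces to a pixel-by-pixel comparison. A minimal sketch, assuming both glyphs are isolated, at the same scale, and stored as equally sized binary matrices (1 = black, 0 = white):
#include <vector>

// Fraction of pixels on which the scanned glyph and the stored font glyph
// agree; 1.0 means identical, 0.0 means fully inverted. Assumes a and b
// have the same dimensions.
double similarity(const std::vector<std::vector<int>> &a,
                  const std::vector<std::vector<int>> &b)
{
    int match = 0, total = 0;
    for (size_t y = 0; y < a.size(); ++y)
        for (size_t x = 0; x < a[y].size(); ++x, ++total)
            if (a[y][x] == b[y][x]) ++match;
    return total ? double(match) / total : 0.0;
}
Run each scanned glyph against every stored font glyph and pick the highest score; the answer below lists more robust features to compare.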
For recognition or classification, most OCR systems use neural networks.
These must be properly configured for the desired task: number of layers, internal interconnection architecture, and so on. The problem with neural networks is that they must also be properly trained, which is quite hard to do well, because you need to get things like the training-dataset size right (large enough to contain the needed information, without over-training). If you do not have experience with neural networks, do not go this way if you need to implement it yourself!
There are also other ways to compare patterns:
vector approach
polygonize the image (edges or border)
compare polygon similarity (surface area, perimeter, shape, ...)
pixel approach
You can compare images based on:
histogram
DFT/DCT spectral analysis
size
number of occupied pixels per line
start position of the occupied pixels in each line (from the left)
end position of the occupied pixels in each line (from the right)
(these 3 parameters can also be computed for rows)
points-of-interest list (points where there is some change, like an intensity bump, an edge, ...)
You create a feature list for each tested character and compare it to your font; the closest match is your character. These feature lists can also be scaled to a fixed size (like 64x64) so the recognition becomes invariant to scaling.
Here is a sample of the features I use for OCR.
In this case the features are scaled to fit in NxN, so each character has 6 arrays of N numbers, like:
int row_pixels[N]; // 1st image
int lin_pixels[N]; // 2nd image
int row_y0[N];     // 3rd image, green
int row_y1[N];     // 3rd image, red
int lin_x0[N];     // 4th image, green
int lin_x1[N];     // 4th image, red
Now pre-compute all the features for each character in your font and for each scanned character, then find the closest match from the font:
minimum distance between all feature vectors/arrays
not exceeding some threshold difference
This is partially invariant to rotation and skew, up to a point. I do OCR for filled characters, so outlined fonts may need some tweaking.
[Notes]
For the comparison you can use a distance metric or a correlation coefficient.
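As an illustration of the distance option, here is a minimal sketch that extracts one feature from the list above (occupied pixels per line) from a binary glyph matrix and compares two feature arrays by summed squared difference; the function names are mine, not from the original answer:
#include <vector>

// Occupied (black == 1) pixel count for each line of a binary glyph matrix,
// i.e. the "pixels per line" feature from the list above.
std::vector<int> occupiedPerLine(const std::vector<std::vector<int>> &glyph)
{
    std::vector<int> feature(glyph.size(), 0);
    for (size_t y = 0; y < glyph.size(); ++y)
        for (size_t x = 0; x < glyph[y].size(); ++x)
            feature[y] += glyph[y][x];
    return feature;
}

// Summed squared difference between two feature arrays; the font glyph with
// the minimum total distance (below your threshold) is the recognized one.
long long featureDistance(const std::vector<int> &a, const std::vector<int> &b)
{
    long long d = 0;
    for (size_t i = 0; i < a.size() && i < b.size(); ++i) {
        long long diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}
In practice you would sum featureDistance over all 6 feature arrays per character and keep the minimum across the font.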

Should barcode font sizes match?

I am trying to convert a string into a Code39 barcode. To increase reliability, I am trying to increase the font size of the barcode from 40 to 60. Would this cause any issue, given that the width and height of the bars will change compared to the previous version at font size 40?
No, the scanner reads the ratios between the widths of the symbols. As long as they all scale the same way, you're fine. I doubt that you'll see increased reliability, though. I hope you'll post your results.

DirectShow: IVMRWindowlessControl::SetVideoPosition stride(?)

I have my own video source and I am using VMR7. When I use 24-bit color depth, my graph contains a Color Space Converter filter, which converts the 24 bits to ARGB32, and everything works fine. When I use 32-bit color depth, my image looks disintegrated; in this case my source produces RGB32 images and passes them directly to VMR7 without color conversion. While resizing the window, I noticed that the image becomes "integrated" (normal) at certain specific values of the destination height. I do not know where the problem is. Here are example photos: http://talbot.szm.com/desintegrated.jpg and http://talbot.szm.com/integrated.jpg
Thank you for your help.
You need to check for a MediaType change in your FillBuffer method.
CMediaType *pmt = nullptr;
HRESULT hr = pSample->GetMediaType((AM_MEDIA_TYPE**)&pmt);  // S_OK only when the format changed
if (S_OK == hr)
{
    SetMediaType(pmt);     // adopt the new format (e.g. the new stride)
    DeleteMediaType(pmt);
}
Depending on your graphics hardware, you can get a different width for your buffer. That is, you connect with an image width of 1000 pixels, but with the first sample you get a new width for your buffer; in my example it was 1024 px.
You then have the new image width in BitmapInfoHeader.biWidth and the old size in VideoInfoHeader.rcSource. So one line of your image is 1024 pixels wide, not 1000 pixels. If you do not account for this, you can get pictures like yours.
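To make the stride handling concrete, here is a minimal sketch of a row-by-row copy using the example numbers above (1000 visible pixels per row, a 1024-pixel buffer stride):
#include <cstring>

// Copies a packed 32-bit RGB frame (widthPx pixels per row) into a buffer
// whose rows are stridePx pixels apart (stridePx >= widthPx).
void copyFrameWithStride(unsigned char *dst, const unsigned char *src,
                         int widthPx, int heightPx, int stridePx)
{
    const int bpp = 4;   // RGB32 = 4 bytes per pixel
    for (int y = 0; y < heightPx; ++y)
        memcpy(dst + (size_t)y * stridePx * bpp,   // destination rows use the stride
               src + (size_t)y * widthPx  * bpp,   // source rows are packed
               (size_t)widthPx * bpp);
}
Writing packed 1000-pixel rows into a 1024-pixel-stride buffer without the per-row offset shears each line a little further, which matches the "disintegrated" photo.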

How to determine the number of characters that will fit on the screen in Qt

How do I determine how many characters of a particular font will fit on the screen?
Have a look at QFontMetrics. Using this, you can determine, among other things, the width of a particular string:
QFontMetrics metrics(myFont);
int width = metrics.width(myString);
Is this what you want?
Note: it is not possible in general to find the number of characters of a particular font that will fit on the screen, since not all fonts are monospace, so the count depends on the actual characters.
You can also use QFontMetrics::elidedText, passing the available space (remember to reduce it by margins/paddings), and then call length() on the resulting string, as in the sketch below.
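A minimal sketch of that approach (assuming availableWidth has already been reduced by margins and paddings):
#include <QFont>
#include <QFontMetrics>
#include <QString>

int charsThatFit(const QFont &myFont, const QString &myString, int availableWidth)
{
    QFontMetrics metrics(myFont);
    // Elide the string so it fits in availableWidth, then count what is left.
    QString visible = metrics.elidedText(myString, Qt::ElideRight, availableWidth);
    return visible.length();   // note: an elided result ends with an ellipsis
}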
