I am going to print from the browser to a receipt printer. I want to support everything from 58 mm receipt paper up to full-size paper using responsive design. I used http://www.unitconversion.org/typography/pixels-x-to-millimeters-conversion.html and it says 58 mm is approx. 219 pixels.
Is this an accurate way to measure pixels in the browser?
That converter is misleading: it can only produce approximations (or, at best, results that happen to be right for your specific display), never generally accurate results.
The number of pixels per mm (or any other physical unit) will vary from display to display, as different displays will have a different number of differently sized pixels. See "Pixel Density" in Wikipedia.
You can specify physical measures in CSS and when printing they should come out OK if the browser and the printer driver are handling things right:
.mysheet { width: 19.2cm; height: 8cm; }
Some browser/OS/display combinations (I don't know exactly which) apparently can also interpret physical measures and render them at their correct size on screen.
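For the 58 mm receipt case, that means specifying the physical width directly instead of converting to pixels; a minimal sketch (the class name is illustrative):
@media print {
  .receipt {
    width: 58mm; /* physical unit; the print pipeline maps it to device dots */
  }
}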
I quote:
DICOM supports up to 65,536 (16 bits) shades of gray for monochrome image display, thus capturing the slightest nuances in medical imaging. In comparison, converting DICOM images into JPEGs or bitmaps (limited to 256 shades of gray) often renders the images unacceptable for diagnostic reading. - Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide by Oleg S. Pianykh
As I am a beginner in image processing, I am used to processing color and monochrome images with 256 levels. For DICOM images, in which representation do I have to process the pixels without rendering them down to 256 levels, given the loss of information that would cause?
Note: if you can think of a better title for this question, please feel free to edit it; I had a hard time and didn't come up with a good one.
First you have to put the image's pixels through the Modality LUT transform (rescale slope/intercept, or an explicit LUT) in order to transform modality-dependent stored values into known units (e.g. Hounsfield units or optical density).
Then, all your processing must be done on the entire range of values (do not convert 16-bit values to 8-bit).
The presentation (visualization) step can then be performed on scaled 8-bit values, usually by passing the data through the VOI LUT transform (window center/width, or an explicit LUT).
See this for the Modality transform: rescale slope and rescale intercept
See this for Window/Width: Window width and center calculation of DICOM image
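To make the two steps concrete, here is a minimal C++ sketch, assuming a linear Modality LUT (rescale slope/intercept) and 16-bit unsigned stored values; all names are illustrative, and the standard's exact VOI formula (PS3.3 C.11.2.1.2) differs slightly at the window edges:

#include <algorithm>
#include <cstdint>
#include <vector>

// Modality LUT: stored values -> known units (e.g. Hounsfield),
// using Rescale Slope / Rescale Intercept from the DICOM header.
std::vector<double> modalityTransform(const std::vector<uint16_t>& stored,
                                      double slope, double intercept)
{
    std::vector<double> out(stored.size());
    for (size_t i = 0; i < stored.size(); ++i)
        out[i] = slope * stored[i] + intercept;
    return out; // process in this full range; do not quantize to 8 bits here
}

// VOI transform (window center/width) -> 8-bit values, for display only.
std::vector<uint8_t> applyWindow(const std::vector<double>& values,
                                 double center, double width)
{
    std::vector<uint8_t> out(values.size());
    const double lo = center - width / 2.0;
    for (size_t i = 0; i < values.size(); ++i) {
        const double v = (values[i] - lo) / width * 255.0;
        out[i] = static_cast<uint8_t>(std::min(255.0, std::max(0.0, v)));
    }
    return out;
}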
Problems:
I am printing custom-size scenes, and the printing must work on a variety of printers: standard, custom-size, or roll-fed (this last case in particular). Some of the custom printers are edge-to-edge.
The user-defined canvas may or may not match the printer paper size. If the image is smaller than the paper, some printers will center it, others (like HP) print it at the top left.
On some printers I can set a "Custom" paper size, others do not support it.
If the printer has minimum margins, some printers seem to clip, others render from the top-left margin, and may or may not clip to the image size.
I would like to handle the clipping and margins myself and send the printer the image exactly as it should fit on the "page".
m_printer->setPaperSize(QPrinter::Custom); // gives:
QPrinter::setPaperSize: Illegal paper size 30
Assuming that the following works,
m_printer->setPaperSize(canvasRectangle.size(), QPrinter::Point);
querying the marked paper size from CUPS still returns the default marked in the PPD (Letter, w4h4, ...), though I can print or cut at that size.
What I need:
I need to find, for the (selected/custom) paper/page, the minimum margins.
I thought I could get the margins by just asking for them
qreal left, right, top, bottom;
m_printer->getPageMargins(&left, &top, &right, &bottom, QPrinter::Point);
qDebug() << left << right << top << bottom;
But regardless of printer (I tried HP, PDF and a custom edge-to-edge printer) I got 10 10 10 10.
I thought I would set them to 0 first... I got back 0, but printing still used some tiny margins, which it either clipped or shifted depending on the device (except on the edge-to-edge printers). So although setting the margins to 0 raised no error even where 0 is impossible, QPrinter merely claimed it had set them successfully.
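For reference, the zero-margin experiment looked roughly like this; QPrinter::setFullPage(true) is supposed to expose the whole sheet and leave margin handling to the application, which is what I want:

m_printer->setFullPage(true); // report the full sheet; margins become my job
m_printer->setPageMargins(0, 0, 0, 0, QPrinter::Point);

qreal l, t, r, b;
m_printer->getPageMargins(&l, &t, &r, &b, QPrinter::Point);
qDebug() << l << t << r << b; // reports all zeros, yet the hardware may still clip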
Right now I am trying to make this work on Linux, using CUPS (and Qt 4.8). I looked in the PPDs of the various printers, but what I see in the ImageableArea entries for the different provided sizes is that each size has different margins, which defeats the idea of minimum margins.
I imagined that the minimum margins (for maximum printable area) should not depend on the paper chosen, but on the printer geometry.
I considered getting the CUPS PPD option values for ImageableArea, but getting it for the "default" paper size doesn't seem useful if I am not using that paper size, and for a custom paper size there is a range, so I don't know what I can get from it.
Also, I don't even seem to be able to get the CUPS option for ImageableArea:
const cups_dest_t* pDest = &m_pPrinters[iPrinterNumber];
for (int i = 0; i < pDest->num_options; ++i) {
    if (strcmp(pDest->options[i].name, pName) == 0)
        qDebug() << pDest->options[i].value;
}
// I can show options like "PageSize" or "PageRegion", but not "ImageableArea"
I am struggling to understand this...
How can I find, using either Qt or CUPS, the minimum possible printer margins?
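For reference, one route I am considering: ImageableArea is a per-size PPD attribute rather than a destination option, so it can be read through the CUPS PPD API (present in the CUPS 1.x that ships alongside Qt 4.8, though deprecated in later CUPS). A sketch, error handling omitted:

#include <cups/cups.h>
#include <cups/ppd.h>

const char* ppdPath = cupsGetPPD(pDest->name); // fetch this printer's PPD
ppd_file_t* ppd = ppdOpenFile(ppdPath);
ppdMarkDefaults(ppd);

ppd_size_t* size = ppdPageSize(ppd, NULL); // NULL = the currently selected size
if (size) {
    // ImageableArea in points, measured from the sheet's lower-left corner
    qDebug() << "margins (pt) L/B/R/T:"
             << size->left << size->bottom
             << (size->width - size->right) << (size->length - size->top);
}

// For custom sizes, CUPS >= 1.4 exposes the allowed ranges (including the
// hardware margins) via ppdPageSizeLimits(ppd, &minSize, &maxSize).
ppdClose(ppd);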
I create a big overview image stitched together from many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry, and they contain strange structural artifacts like slanted lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artifacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying each one into the corresponding region of the big image. I am not using OpenCV's stitching.
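The scaling step is essentially the following (names are illustrative):

QImage scaledTile = tile.scaled(targetWidth, targetHeight); // uses Qt::FastTransformation (nearest neighbour) by default
// Qt::SmoothTransformation filters before sampling, which suppresses moiré on downscale:
// tile.scaled(targetWidth, targetHeight, Qt::IgnoreAspectRatio, Qt::SmoothTransformation);

QPainter painter(&overviewImage);
painter.drawImage(tileRegion.topLeft(), scaledTile); // copy into the big image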
I assume this happens because of the image contents, because most of the overview images are OK.
The question is: how can I avoid such barely observable artifacts becoming clearly visible after scaling? Is there some means in OpenCV or QImage?
Are there any algorithms to find out whether image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you add up image frames of a uniform material (or a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus)
2. ground_input: the average (or sqrt of the per-pixel sum of squares) of ground_output
3. calib_image: ground_input /(per px) ground_output. Saved for the session, or persisted to a file (important: make sure there is no lossy compression such as JPEG!)
4. work_input: the images to work on
5. work_output = work_input *(per px) calib_image: images calibrated for systematic noise
If you can't create a perfect ground_input target, such as by having a uniform material on hand, do not worry too much: any material moved uniformly (or randomly) for enough time will act as a uniform material for this purpose (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5); see the sketch below.
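A minimal OpenCV sketch of steps 1-5, assuming 16-bit grayscale frames (all names are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Steps 1-3: build calib_image from frames shot over a uniform target.
cv::Mat buildCalibration(const std::vector<cv::Mat>& groundFrames) {
    cv::Mat groundOutput = cv::Mat::zeros(groundFrames[0].size(), CV_64F);
    for (const cv::Mat& frame : groundFrames) {     // step 1: per-pixel sum
        cv::Mat f;
        frame.convertTo(f, CV_64F);
        groundOutput += f;
    }
    double groundInput = cv::mean(groundOutput)[0]; // step 2: global average
    cv::Mat calib;
    cv::divide(groundInput, groundOutput, calib);   // step 3: per-pixel division
    return calib; // persist losslessly (e.g. cv::FileStorage), never JPEG
}

// Step 5: calibrate a working image for the systematic noise.
cv::Mat applyCalibration(const cv::Mat& workInput, const cv::Mat& calib) {
    cv::Mat w;
    workInput.convertTo(w, CV_64F);
    return w.mul(calib);                            // per-pixel multiplication
}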
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The slanted lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference, then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or by fitting something to filter out the noise (baluns, for example).
There is a similar question here whose answer in essence says:
The height - specifically from the top of the ascenders (e.g., 'h' or 'l' (el)) to the bottom of the descenders (e.g., 'g' or 'y')
This has also been my experience, i.e. in 14px Arial the height of the letter K (from the baseline to the cap line) is about 10px.
The specs say nothing about the calculated font-size, so I'm guessing this is browser-specific, but I could not find any reference for it.
(Other questions here and here ask roughly the same thing, but sadly no answer gives a satisfying explanation.)
Is there any documentation on why the font-size seems to be the size "from ascenders to descenders"?
Some background on the subject
Back when letters were created on metal, the em referred to the size of each block that the letter would be engraved on, and that size was determined by the capital M because it usually takes up the most space.
Nowadays, font developers create their fonts on a computer without the limitations of a physical piece of metal, so while the em still exists, it's nothing more than an imaginary boundary in the software, and thus prone to being manipulated or disregarded altogether.
Standards
In an OpenType font, the em size is supposed to be set to 1000 units. And in TrueType fonts, the em size is usually either 1024 or 2048.
The most accurate way to define a font style is to use em: that way, when you define a font-size, the dimension does not refer to the pixel height of the font but to the font's x-height, which is determined by the distance between the baseline and the mean line of the font.
For the record, 1pt is about 0.3515 mm. And a px is one "dot" on your screen, defined by the dots per inch (the resolution) of your screen; it therefore differs from screen to screen and is the worst way to define a font size.
Unit conversion
This is a pretty good read if, like me, you enjoy literature that makes your eyes bleed: International unification of typographic measurements
1 point (Truchet) | 0.188 mm
1 point (Didot) | 0.376 mm or 1/72 of a French royal inch
1 point (ATA) | 0.3514598 mm or 0.013837 inch
1 point (TeX) | 0.3514598035 mm or 1/72.27 inch
1 point (Postscript) | 0.3527777778 mm or 1/72 inch
1 point (l’Imprimerie nationale, IN) | 0.4 mm
1 pica (ATA) | 4.2175176 mm = 12 points (ATA)
1 pica (TeX) | 4.217517642 mm = 12 points (TeX)
1 pica (Postscript) | 4.233333333 mm = 12 points (Postscript)
1 cicero | 4.531 mm = 12 points (Didot)
Resolutions (dot pitch vs. dpi)
10.0 µm | 2540 dpi
20.0 µm | 1270 dpi
21.2 µm | 1200 dpi
40.0 µm | 635 dpi
42.3 µm | 600 dpi
80.0 µm | 317 dpi
84.7 µm | 300 dpi
100.0 µm | 254 dpi
250.0 µm | 102 dpi
254.0 µm | 100 dpi
Standards are only worth so much...
The actual size of one font's glyphs vs. another's will always vary depending on:
how the developer designed the glyphs when creating the font,
how the browser renders those characters (no two browsers are going to be exactly the same), and
the resolution and ppi of the screen displaying the font.
Example
As an example of how common it is for font developers to manipulate the geometry: back when Apple created the Zapfino script font, they sized it relative to the largest capitals in the font, as would be expected, but then decided that the lowercase letters looked too small, so a few years later they revised it so that any given point size was about four times larger than in other fonts.
If you don't have a headache yet, read some more...
There's some good information on Wikipedia about modern typography and its origins, and other relevant subjects.
Point (typography)
Pixels per inch
Font metrics
Typographic units
And some first-hand experience
If you want to get more first-hand understanding, you can download the free font-development tool FontForge, which is comparable to the non-free FontLab Studio (in my experience, these two are the popular choices among font developers). Both have active communities, so you can find plenty of support while learning how to use them.
Fontlab Studio
Fontlab Fontographer
FontForge
Fontlab Wikibook
FontForge Documentation
The answer regarding typesetting is excellent, but be aware that CSS diverges from traditional typography in many cases.
In CSS, the font-size determines the height of the "em box". The em box is a bounding box that can contain all letters of the font, including ascenders and descenders. Informally, you can think of font-size as the "j-height", since a lowercase j has both an ascender and a descender and therefore (in most fonts) uses the full em-box height.
This means most letters (like a capital M) will have space above and below them within the em box. The relative amount of space above and below a capital letter varies between fonts, since some fonts have relatively larger or smaller ascenders and descenders. This is part of what stylistically sets fonts apart from each other.
You ask why font-size includes ascenders and descenders, i.e. why it corresponds to the height of the em box even though the height of most letters will be less than this. Well, since most texts do include letters with ascenders and descenders, the em-box height indicates how much vertical space the text requires (at minimum), which is quite useful!
A caveat: some glyphs may extend even beyond the em box in some fonts. For example, the letter "Å" often extends above the em box. This is a stylistic choice by the designer of the font.
I've experimented to pin down exactly what the font-size corresponds to in terms of font-metrics (as shown in the diagram in davidcondrey's answer).
Testing in Safari, Chrome and Firefox on macOS, setting font-size to 100px seems to make the distance between the ascender line and the descender line come out at 100px.
Unfortunately, "ascender" and "descender" mean different things in different font formats, so to disambiguate: in the case of OpenType, we're talking about the 'Typo Ascender' and 'Typo Descender' from the OS/2 table.
[diagram: OpenType metrics vs. CSS font-size]
An interactive illustration is available on the opentype.js glyph inspector https://opentype.js.org/glyph-inspector.html
When positioning the character (again in terms of OpenType metrics), browsers seem to consider y = 0 to be the ascender (which is not the same as the 'ascender line' in davidcondrey's diagram, but rather the ascender from the hhea table in OpenType). However, if CSS line-height is not set to normal, the position is offset by some amount; I think the rules here might be a bit more complex.
I expect more factors come into play, and it may differ between operating systems and browsers, but this is at least a good approximation.
After searching for a satisfying answer to a similar question, I found this graphic to be a lot of help:
http://static.splashnology.com/articles/Choosing-the-Best-Units/font-units-large.png
I am trying to convert a string into a Code39 barcode. To increase reliability, I am trying to increase the font size of the barcode from 40 to 60. Would this cause any issue, given that the width and height of the bars will change compared to the previous version at font size 40?
No; the scanner reads the ratio between the widths of the symbols. As long as they both scale the same way, you're fine. I doubt that you'll see increased reliability, though. I hope you'll post results.