I've been looking at this page, as well as this code example, and I've noticed that the x_advance, y_advance, x_offset and y_offset fields in hb_glyph_position_t are of type hb_position_t, which is an alias for int32_t. I haven't found any documentation about which units these fields use. The examples above suggest that they're 64ths of something, but that's all I can infer.
Does anyone know the exact unit implied by hb_position_t?
It is in input font size units (say pixels).
The idea is that you multiply the input font size by 64, then divide the positions by 64 after shaping, so you stay in control of how much sub-pixel precision you need.
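For example, here is a rough sketch using the HarfBuzz C API (the shape_at_pixel_size wrapper and the assumption that you already have an hb_face_t are placeholders for illustration, not something HarfBuzz prescribes):

#include <hb.h>

// Rough sketch: shape `text` at `pixel_size` pixels while keeping
// 1/64th-pixel precision. Assumes `face` was created earlier
// (e.g. with hb_face_create from a font blob).
void shape_at_pixel_size(hb_face_t *face, const char *text, int pixel_size)
{
    hb_font_t *font = hb_font_create(face);
    // Ask HarfBuzz to report positions in 64ths of a pixel.
    hb_font_set_scale(font, pixel_size * 64, pixel_size * 64);

    hb_buffer_t *buf = hb_buffer_create();
    hb_buffer_add_utf8(buf, text, -1, 0, -1);
    hb_buffer_guess_segment_properties(buf);

    hb_shape(font, buf, nullptr, 0);

    unsigned int count = 0;
    hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, &count);
    for (unsigned int i = 0; i < count; ++i) {
        // Divide by 64.0 to get back to fractional pixels.
        double x_advance_px = pos[i].x_advance / 64.0;
        double x_offset_px  = pos[i].x_offset  / 64.0;
        // ... place the glyph using these fractional-pixel values ...
        (void)x_advance_px;
        (void)x_offset_px;
    }

    hb_buffer_destroy(buf);
    hb_font_destroy(font);
}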
Related: split off from https://stackoverflow.com/questions/31076846/is-it-possible-to-use-javascript-to-draw-an-svg-that-is-precise-to-1-000-000-000
The SVG spec states that SVGs use double-precision floats for all values.
Through testing, it's easy to verify this.
Affinity Designer is a vector graphics program that allows zooms up to 1,000,000,000%, and it too uses double-precision floats for all calculations.
I would like to know from someone who deeply understands double-precision floats: is it possible to create an SVG that is visually correct at 1,000,000,000% zoom?
Honestly, I'm struggling with getting a grasp on the math of this:
9007199254740992 (the largest integer a double can represent exactly, according to https://stackoverflow.com/a/1848953/2328064) is much larger than 1,000,000,000, so it seems reasonable that if something is 2 or even 2000 units wide, it would still be small compared to that limit after zooming by 1,000,000,000%.
Hypothetical examples as ways to approach the question:
If we created an SVG of a 2D slice of the entire visible universe how far could we zoom in before floating point rounding started shifting things by 1 pixel?
If we start with an SVG that is 1024x1024, can we create a 'microscopic' grid that is both visible and visually correct at 1,000,000,000% zoom? (Like, say, we can see 20+ equidistant squares)
Edit:
Based on everything so far, the definitive answer is yes (with some important and interesting caveats for actually viewing this SVG).
In order to get the most precision at high zoom, start at the centre.
The SVG spec is not designed for this level of precision. This is especially true of the spec for SVG viewers.
(Not mentioned below) Typically curves are represented in software as Bézier curves, and standard Bézier curve implementations do not draw mathematically perfect circles.
Of course it is. Floating point math deals with relative, not absolute, precision. If you created a regular polygon at the origin with radius 1e-7, then zoomed it to 1e7x size, you would expect to see a regular polygon with the same size and precision as an unzoomed polygon of radius 1.
If you were to create the same regular polygon with vertices centered at (0, 1e9) or so, you'd expect to see some serious error. Doubles that large do not have enough absolute precision to accurately represent a shape that small.
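To put rough numbers on that, here is a quick check of the gap between adjacent representable doubles at each magnitude (a C++ sketch; the figures come from the double format itself, not from SVG):

#include <cmath>
#include <cstdio>

int main()
{
    // Gap to the next representable double ("ULP") at two magnitudes.
    double ulp_near_small = std::nextafter(1e-7, INFINITY) - 1e-7;
    double ulp_near_1e9   = std::nextafter(1e9,  INFINITY) - 1e9;

    std::printf("ULP near 1e-7: %g\n", ulp_near_small); // ~1.3e-23
    std::printf("ULP near 1e9 : %g\n", ulp_near_1e9);   // ~1.2e-7

    // A 1e-7-sized polygon whose vertices sit near y = 1e9 spans only about
    // one ULP, so its vertices collapse onto a handful of representable
    // values, while the same polygon at the origin has plenty of headroom.
    return 0;
}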
However, there's another way to express "shapes far from the origin" in SVG, using a node transformation. If you were to specify the polygon relative to the origin, but give it a translation of (0,1e9), and zoomed to that point, you'd expect to see the same precision as the origin-centered polygon.
HOWEVER however, all this assumes that the SVG renderer in question is designed to do such things in the most precise possible manner (such as composing the shape and view transformations before applying them to the vertices, rather than applying one at a time). I'm not sure if any of the SVG renderers out there go to such lengths, given the unusualness (some might say, the wrong-headedness) of such a use case.
TL;DR: It is possible to create such an SVG file, but it's impossible to know if a renderer or other tools that merely follow the spec will render/process it correctly.
This is a case of the SVG standard being too vague. Since the renderers, canvases, etc. only have to follow the spec, the realistic answer is: you can create it, but it won't be usable for what you intend to use it for.
Most likely no.
A double has around 53 bits of precision, so when multiplying by 1e9 percent you could pick up a small amount of error, and there are no guarantees. It may not be enough to push things out of the correct pixel, but I'd suggest building your own working solution and having a look at rasterisation, because that seems to be what you need to know more about.
I am trying to convert a string into a Code39 barcode. To increase reliability I am trying to increase the font size of the barcode from 40 to 60. Would this cause any issue, since the width and height of the bars will change compared to the previous version at font size 40?
No, the scanner reads the ratio between the width of the symbols. As long as they both scale the same way, you're fine. I doubt that you'll see increased reliability. I hope you'll post results.
This is related to CSS color codes:
With hex codes we can represent 16,777,216 colors, from #000000 to #FFFFFF.
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
According to the specs:
Values outside the device gamut should be clipped or mapped into the gamut when the gamut is known: the red, green, and blue values must be changed to fall within the range supported by the device. User agents may perform higher quality mapping of colors from one gamut to another. For a typical CRT monitor, whose device gamut is the same as sRGB, the four rules below are equivalent:
I'm doubtful that browsers can actually render all these values (but if they do, please tell me and ignore the rest of this post).
I'm assuming there's some mapping from rgb(percentage) to hex (but again, I'm not really sure how this works).
Ideally I'd like to find out the function rgb(percentage)->HEX
If I had to guess it would probably be one of these 3.
1) Round to the nearest HEX
2) CEIL to the nearest HEX
3) FLOOR to the nearest HEX
The problem is that I need the mapping to be accurate, and I have no idea where to search.
There's no way my eyes can differentiate color at that level, but maybe there's some clever way to test each of these 3.
It might also be browser dependent. Can this be tested?
EDIT:
From empirical testing, Firefox seems to round.
EDIT:
I'm looking through Firefox's source code right now,
nsColor.h
// A color is a 32 bit unsigned integer with four components: R, G, B
// and A.
typedef PRUint32 nscolor;
It seems Firefox only has room for 256 values (0 to 255) for each of R, G and B. This hints that rounding might be the answer, but maybe something's being done with the alpha channel.
I think I found a solution for Firefox anyway; thought you might like a follow-up:
Looking through the source code I found a file:
nsCSSParser.cpp
For each RGB percentage component it does the following:
It takes the percentage component and multiplies it by 255.0f
Stores it in a float
Passes it into a function NSToIntRound
The result of NSToIntRound is stored into an 8-bit integer datatype before it is combined with the other two components and an alpha channel
Looking for more detail on NSToIntRound:
nsCoord.h
inline PRInt32 NSToIntRound(float aValue)
{
return NS_lroundf(aValue);
}
NSToIntRound is a wrapper function for NS_lroundf
nsMathUtils.h
inline NS_HIDDEN_(PRInt32) NS_lroundf(float x)
{
return x >= 0.0f ? PRInt32(x + 0.5f) : PRInt32(x - 0.5f);
}
This function is actually very clever; it took me a while to decipher (I don't really have a good C++ background).
Assuming x is positive
It adds 0.5f to x and then casts to an integer
If the fractional part of x was less than 0.5, adding 0.5 won't change the integer and the fractional part is truncated,
Otherwise the integer value is bumped by 1 and the fractional part is truncated.
So each component's percentage is first multiplied by 255.0f, then rounded and cast into a 32-bit integer, and then cast again into an 8-bit integer.
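Putting those steps together, here is a stand-alone sketch of the conversion in plain C++ (my own approximation of the same steps, not the actual Mozilla types or code):

#include <cstdint>
#include <cstdio>

// Mirrors the NS_lroundf idea for non-negative values: add 0.5 and truncate.
static std::int32_t round_to_int(float v)
{
    return v >= 0.0f ? std::int32_t(v + 0.5f) : std::int32_t(v - 0.5f);
}

// Map one CSS percentage component (0.0 - 100.0) to an 8-bit channel,
// following the parsing steps described above.
static std::uint8_t percent_to_channel(float percent)
{
    float scaled = (percent / 100.0f) * 255.0f; // percentage -> 0.0 .. 255.0
    return static_cast<std::uint8_t>(round_to_int(scaled));
}

int main()
{
    // rgb(23.456% 78.9% 0%) rounds to (60, 201, 0), i.e. #3CC900
    std::printf("#%02X%02X%02X\n",
                (unsigned)percent_to_channel(23.456f),
                (unsigned)percent_to_channel(78.9f),
                (unsigned)percent_to_channel(0.0f));
    return 0;
}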
I agree with most of you that say this appears to be a browser dependent issue, so I will do some further research on other browsers.
Thanks a bunch!
According to W3C Specs, Valid RGB percentages fit in a range from (0.0% to 100.0%) essentially giving you 1,003,003,001 color combinations. (1001^3)
No, more than that, because the precision is not limited to one decimal place. For example, this is valid syntax:
rgb(23.456% 78.90123456% 0%)
The reason for this is that, while 8 bits per component is common (hence hex codes), newer hardware supports 10 or 12 bits per component, and wider-gamut colorspaces need more bits to avoid banding.
This bit-depth agnosticism is also why newer CSS color specifications use a 0 to 1 float range.
Having said which, the CSS Object Model still requires color values to be serialized at 8 bits per component. This is going to change, but the higher-precision replacement is still being discussed in the CSS working group. So for now, browsers don't let you get more than 8 bits per component of precision.
If you are converting a float or percentage form to hex (or to a 0-255 integer), the correct method is rounding. Floor or ceiling will not space the values evenly at the top and bottom of the range.
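For example, a quick sweep over evenly spaced percentages (illustrative only; the 100001-sample sweep is an arbitrary choice) shows how lopsided the end buckets get with floor compared to rounding:

#include <cmath>
#include <cstdio>

int main()
{
    // Sweep 100001 evenly spaced inputs across 0%..100% and count how many
    // land on channel 0 and channel 255 under each mapping.
    int floor_to_0 = 0, floor_to_255 = 0;
    int round_to_0 = 0, round_to_255 = 0;
    for (int i = 0; i <= 100000; ++i) {
        double f = i / 100000.0;               // fraction of the range
        int flo = (int)std::floor(f * 255.0);  // floor mapping
        int rnd = (int)std::lround(f * 255.0); // round-to-nearest mapping
        if (flo == 0)   ++floor_to_0;
        if (flo == 255) ++floor_to_255;
        if (rnd == 0)   ++round_to_0;
        if (rnd == 255) ++round_to_255;
    }
    // floor: 393 inputs hit 0 but only 1 hits 255 (lopsided ends);
    // round: 197 inputs hit each of 0 and 255 (even, half-width end buckets).
    std::printf("floor: 0<-%d, 255<-%d\n", floor_to_0, floor_to_255);
    std::printf("round: 0<-%d, 255<-%d\n", round_to_0, round_to_255);
    return 0;
}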
For the value below, does it mean that the output device must have exactly 256 colors, or can the output device have 256 or fewer, or must it have 256 or more? Can someone explain this to me in simple terms?
(color-index: 256)
From the W3C docs:
The ‘color-index’ media feature describes the number of entries in the color lookup table of the output device. If the device does not use a color lookup table, the value is zero.
From that description, and from the examples after it, it appears that the answer is that it is the exact number of colors in the index (should it exist).
You may specify a minimum with min-color-index.
(color-index: 256) matches any device with 256 colors exactly.
(min-color-index: 256) matches any device with at least 256 colors
It's the bit depth available for the color applied to the object.
From my understanding it is exact, and you can use the min- and max- prefixes to specify ranges.
W3C Docs
I'm using GDI+ in C++. (This issue might exist in C# too).
I notice that whenever I call Graphics::MeasureString() or Graphics::DrawString(), the string is padded with blank space on the left and right.
For example, if I am using a Courier font (not italic!) and I measure "P" I get 90, but "PP" gives me 150. I would expect a monospace font to give exactly double the width for "PP".
My question is: is this intended or documented behaviour, and how do I disable this?
RectF Rect(0, 0, 32767, 32767);
RectF Bounds1, Bounds2;
graphics->MeasureString(L"PP", 1, font, Rect, &Bounds1);  // measures just the first "P"
graphics->MeasureString(L"PP", 2, font, Rect, &Bounds2);  // measures both characters
REAL margin = Bounds1.Width * 2 - Bounds2.Width;          // the extra padding MeasureString adds per string
It's by design; that method doesn't use the actual glyphs to measure the width, and so adds a little padding to allow for overhangs.
MSDN suggests using a different method if you need more accuracy:
To obtain metrics suitable for adjacent strings in layout (for example, when implementing formatted text), use the MeasureCharacterRanges method or one of the MeasureString methods that takes a StringFormat, and pass GenericTypographic. Also, ensure the TextRenderingHint for the Graphics is AntiAlias.
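For example, here is a minimal sketch of the MeasureCharacterRanges route (the MeasureTightWidth helper and variable names are just for illustration; it assumes graphics and font are your existing objects):

#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

// Returns the width of "PP" without the padding MeasureString adds.
REAL MeasureTightWidth(Graphics *graphics, const Font *font)
{
    RectF layout(0, 0, 32767, 32767);
    RectF tight;

    StringFormat format(StringFormat::GenericTypographic()); // copy the typographic format
    CharacterRange range(0, 2);                               // both characters of "PP"
    format.SetMeasurableCharacterRanges(1, &range);

    Region region;
    graphics->SetTextRenderingHint(TextRenderingHintAntiAlias);
    graphics->MeasureCharacterRanges(L"PP", 2, font, layout, &format, 1, &region);
    region.GetBounds(&tight, graphics);  // bounds of the glyphs, no extra padding
    return tight.Width;
}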
It's true that this is by design; however, the link in the accepted answer is actually not perfect. The issue is the use of floats in all those methods, when what you really want to be using is pixels (ints).
The TextRenderer class is meant for this purpose and works with the true sizes. See this link from msdn for a walkthrough of using this.
Passing StringFormat::GenericTypographic will fix your issue:
graphics->MeasureString(L"PP", 2, font, Rect, StringFormat::GenericTypographic(), &Bounds2);
Pass the same format to DrawString.
Sounds like it might also be connected to hinting, based on this KB article: Why text appears different when drawn with GDIPlus versus GDI.
TextRenderer was great for getting the size of the font. But in the drawing loop, using TextRenderer.DrawText was excruciatingly slow compared to graphics.DrawString().
Since the width of a string is the problem, you're much better off using a combination of TextRenderer.MeasureText and graphics.DrawString().