My question is, simply, how many (non-zero) decimal places can I include in a value I use in a CSS stylesheet before the browser rounds the number when interpreting it?
NOTE: I am aware that any decimal pixels are rounded (differently by different browsers) because screens cannot display sub-pixel units. What I am asking is: before that rounding takes place, how many decimal places are retained as input to the browser's final rendering calculations and rounding?
Be it truncation or rounding, in an ideal world, neither of these things should happen. The spec simply says that a numeric value may either consist of
one or more digits, or
any number of digits, followed by a period for the decimal point, followed by one or more digits.
The spec even accounts for the fact that the leading zero before the decimal point in a value that's less than 1 is not significant and can thus be omitted, e.g. opacity: .5. But there is quite simply no theoretical upper limit.
But, due to implementation limitations, browsers will often "round" values for the purposes of rendering. This is not something you can control other than by changing the precision of your values, and even so, this behavior can vary between browsers, for obvious reasons, and is therefore something you cannot rely on.
Related
I noticed that the C++ standard library has separate functions for round and lround rather than just having you use long(round(x)) for the latter.
Looking into the implementation in glibc, I find that indeed, for platforms using IEEE754 floating point, the version that returns an integer will directly manipulate the bits from within the floating point representation, and not do the rounding using floating point operations (e.g. adding ±0.5).
What is the benefit of having a distinct implementation when you want the result as an integer type? Is this supposed to be faster, or more accurate? If it is better to use integer math on the underlying representation, why not just always do it that way even if returning the result as a double?
One reason is that adding .5 is insufficient. Let’s say you add .5 and then truncate to an integer. (How? Is there an instruction for that? Or are you doing more work?) If x is ½ − 2⁻⁵⁴ (the greatest representable value less than ½), adding .5 yields 1, because the mathematical sum, 1 − 2⁻⁵⁴, is exactly halfway between the nearest two representable values, 1 − 2⁻⁵³ and 1, and the common default rounding mode, round-to-nearest-ties-to-even, rounds that to 1. But the correct result for lround(x) is 0.
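Here is a minimal C sketch of that failure mode, assuming IEEE 754 double and the default round-to-nearest-ties-to-even mode; running it shows the two results differ:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Largest double strictly less than 0.5, i.e. 1/2 - 2^-54. */
    double x = nextafter(0.5, 0.0);

    /* Naive approach: add .5 and truncate. x + 0.5 is exactly halfway
       between 1 - 2^-53 and 1, so ties-to-even rounds it to 1. */
    long naive = (long) trunc(x + 0.5);

    /* lround rounds the original value, which is below 1/2, so it returns 0. */
    long correct = lround(x);

    printf("naive  = %ld\n", naive);   /* prints 1 */
    printf("lround = %ld\n", correct); /* prints 0 */
    return 0;
}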
And, of course, lround is specified to round ties away from zero, regardless of the current rounding mode. You could set the rounding mode, do some arithmetic, and restore the rounding mode, but there are problems with this.
One is that changing the rounding mode is typically a time-consuming operation. The rounding mode is global state that affects most floating-point instructions. So the processor has to ensure all pending instructions complete with the prior mode, change the global state, and ensure all later instructions start after that change.
If you are lucky, you might have a processor with per-instruction rounding modes or something similar, and then you can use any rounding mode you like without time penalty. Hewlett Packard has some processors like that. However, “round away from zero” is an uncommon mode. Most processors have round-to-nearest-ties-to-even, round toward zero, round down (toward −∞), and round up (toward +∞), and round-to-odd is becoming popular for its value in avoiding double-rounding errors. But round away from zero is rare.
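For illustration, the usual save/set/restore dance with C's <fenv.h> looks roughly like the sketch below (the helper name is made up; note that the standard only specifies macros for the to-nearest, upward, downward, and toward-zero modes, and only where the hardware supports them, so there is no portable away-from-zero mode to select anyway):

#include <fenv.h>
#include <math.h>

#pragma STDC FENV_ACCESS ON

/* Hypothetical helper: evaluate rint(x) under a chosen rounding mode,
   saving and restoring the global mode around the operation. */
double rint_with_mode(double x, int mode)
{
    int saved = fegetround();   /* remember the current global mode      */
    fesetround(mode);           /* e.g. FE_DOWNWARD, FE_UPWARD, ...      */
    double r = rint(x);         /* rint honors the current rounding mode */
    fesetround(saved);          /* restore it for everyone else          */
    return r;
}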
Another reason is that doing floating-point instructions alters the floating-point status flags and may generate traps, but it is desired that library routines behave as single operations. For example, if we add .5 and rounding occurs, the inexact flag will be raised, since the floating-point addition with .5 produced a result different from the mathematical sum. But to the user of lround, no inexact condition ever occurs; lround is defined to return a value rounded to an integer, and it always does so—within the long range, it never returns a computed result different from its ideal mathematical definition. So if lround(x) raised the inexact flag, that would be incorrect behavior. To avoid it, an implementation that used floating-point instructions would have to save the current floating-point flags, do its work, and restore the flags before returning.
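A small C demo of the flag behavior described above; whether a particular libm's lround leaves FE_INEXACT clear is an implementation matter, but the naive x + .5 demonstrably raises it:

#include <fenv.h>
#include <math.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    double x = nextafter(0.5, 0.0);     /* 1/2 - 2^-54 */

    feclearexcept(FE_INEXACT);
    volatile double y = x + 0.5;        /* the addition rounds, raising FE_INEXACT */
    printf("naive add: inexact=%d (y = %g)\n", fetestexcept(FE_INEXACT) != 0, y);

    feclearexcept(FE_INEXACT);
    long r = lround(x);                 /* a careful implementation leaves the flag clear */
    printf("lround:    inexact=%d (r = %ld)\n", fetestexcept(FE_INEXACT) != 0, r);
    return 0;
}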
I have searched the internet and it seems very hard to find this info.
When I do
div {
  width: calc(1e-10 * 1e12px);
}
It sets the width to 100px. But when I do
div {
  width: calc(1e-1000 * 1e1002px);
}
It fails. Clearly, 1e1002 is out of range.
What is the valid range of numbers in CSS? Does it depend on the unit? Is it browser specific?
It is up to each browser to pick limits for CSS real numbers. The spec supports a theoretically infinite range but relies on vendors to provide 'reasonable' support.
4.1. Range Restrictions and Range Definition Notation
Properties can restrict numeric values to some range. If the value is outside the allowed range, then unless otherwise specified, the declaration is invalid and must be ignored.
[...]
CSS theoretically supports infinite precision and infinite ranges for all value types; however in reality implementations have finite capacity. UAs should support reasonably useful ranges and precisions. Range extremes that are ideally unlimited are indicated using ∞ or −∞ as appropriate.
Source: https://www.w3.org/TR/css-values-3/#numeric-ranges
See also:
4.3. Real Numbers: the <number> type
Number values are denoted by <number>, and represent real numbers, possibly with a fractional component.
When written literally, a number is either an integer, or zero or more decimal digits followed by a dot (.) followed by one or more decimal digits and optionally an exponent composed of "e" or "E" and an integer. It corresponds to the <number-token> production in the CSS Syntax Module [CSS3SYN]. As with integers, the first character of a number may be immediately preceded by - or + to indicate the number’s sign.
Source: https://www.w3.org/TR/css-values-3/#numbers
Suppose we have a data set of numbers, with which we want to do some calculations using addition/subtraction/multiplication/division using a computer.
The coverage of the real numbers by the floating point representation varies a lot, depending on the number being represented:
In terms of absolute precision of the real->FP mapping, the "holes" grow toward the bigger numbers, with a weird hole around 0, depending on the architecture. Because of this, add/sub precision toward the bigger numbers will drop (a small probe illustrating this follows these two points).
If we divide two consecutive numbers that are representable in our floating-point format, the ratio grows both as we go toward the bigger numbers and as we go toward smaller and smaller fractions.
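A small C probe, assuming IEEE 754 double, makes the first point visible: the absolute gap between consecutive representable values grows with the magnitude, while the relative gap stays near 2⁻⁵² for normal numbers.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Print the spacing (ulp) between consecutive doubles at a few magnitudes. */
    double probes[] = { 1e-300, 1.0, 1e3, 1e15, 1e300 };
    for (int i = 0; i < 5; i++) {
        double x   = probes[i];
        double gap = nextafter(x, INFINITY) - x;  /* distance to the next double up */
        printf("near %g the gap is %g (relative: %g)\n", x, gap, gap / x);
    }
    return 0;
}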
So, my question is:
Is there a "sweet interval" for floats on an ordinary PC today, where the results of arithmetic with the said operators (add/sub/mul/div) are just more precise?
If I have a data set of numbers with many significant digits, like "123123123123123", "134534513412351151", etc., with which I want to do some arithmetic, which floating-point interval should they be converted to in order to get the best precision for the result?
Since floating-point numbers are something like 1.xxx*10^yyy, 2.xxx*10^yyy, ..., 9.xxx*10^yyy, I would assume that converting my numbers into the [1, 9] interval would give the best results for the memory consumed, but I may be terribly wrong...
Supposing I use C, can such a conversion even be made? Is there a best practice for doing it? Before an operation, C will convert the operands to the same format, so I guess I would have to use a string representation, inject a "." somewhere and parse that as a float.
Please note:
This is a theoretical question; I don't have an actual data set on hand that would decide what is best. On the same note, the mention of C was random: I am also interested in responses like "forget C, I would use this and this, BECAUSE it supports this and this".
Please spare me from answers like "this cannot be answered, because it depends on the actual operations, since the results may be in another magnitude range than the original data, etc., etc.". Let's suppose that the results of the calculation are more or less in the same interval as the operands. Sure, when dividing "more-or-less the same magnitude" operands, the result will be somewhere between 1 and 10, maybe 0.1 and 100, ... but that is probably exactly the best interval they can be in.
Of course, if the answer includes some explanation, other than a brush-off, I will be happy to read it!
The absolute precision of floating-point numbers changes with the magnitude of the numbers because the exponent changes. The relative precision does not change, except for numbers near the bottom of the exponent range, where underflow occurs. If you multiply binary floating-point numbers by a power of two, perform arithmetic (suitably adjusted for the scaling), and reverse the scaling, the results will be identical to doing the arithmetic without scaling, barring effects from overflow and underflow. If your arithmetic does involve underflow or overflow, then scaling could help avoid that. For example, if your precision is suffering because your numbers are so small that some intermediate results are below the normal range of the floating-point format, then scaling by a power of two can avoid the loss of precision from underflow.
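A minimal C illustration of the power-of-two-scaling claim (the specific numbers and the scale exponent are arbitrary; ldexp multiplies by a power of two exactly, so the two results compare equal):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 123123123123123.0;
    double b = 134534513412351151.0;
    int    k = -40;                     /* arbitrary power-of-two scale factor */

    double direct = a * b;              /* rounded once, as usual */

    /* Scale both operands by 2^k, multiply, then undo the scaling by 2^(-2k). */
    double scaled = ldexp(ldexp(a, k) * ldexp(b, k), -2 * k);

    printf("direct = %.17g\nscaled = %.17g\nequal  = %d\n",
           direct, scaled, direct == scaled);   /* prints equal = 1 */
    return 0;
}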
If you scale by something other than a power of two, the results can be different, due to changes in the significands. The effects will generally be tiny, and whether the results are better or worse will effectively be random chance, except in carefully engineered special situations.
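For contrast, a sketch of the non-power-of-two case: scaling 0.1 and 0.2 by ten before adding, then dividing the sum back by ten, lands on a different double than adding them directly, because the decimal scaling perturbs the significands.

#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;

    double direct = a + b;                          /* 0.30000000000000004... */
    double scaled = (a * 10.0 + b * 10.0) / 10.0;   /* 0.29999999999999999... */

    printf("direct = %.17g\nscaled = %.17g\nequal  = %d\n",
           direct, scaled, direct == scaled);       /* prints equal = 0 */
    return 0;
}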
I'm trying to reduce the number of decimals of a JS operation and use the result to set a transform: scale(x) inline CSS to an element.
I can't find any reference to know how many decimals are allowed by such CSS function.
I want to know how many digits are allowed (and used by the browser in the transformation) after the decimal point. (0.0000000N)
The specification defines the value for scale as a <number>, which is defined as:
A number is either an <integer> or zero or more decimal digits followed by a dot (.) followed by one or more decimal digits and optionally an exponent composed of "e" or "E" and an integer. It corresponds to the <number-token> production in the CSS Syntax Module [CSS3SYN]. As with integers, the first character of a number may be immediately preceded by - or + to indicate the number’s sign.
Note the lack of how many "more" decimal digits are allowed. So any limit will be imposed by the browser, which will obviously vary by browser.
As it seems it could be useful for others, and to complement the accepted answer by extending it, I'll upgrade my comment to an answer:
Ultimately, the number of decimals you'll get depends mainly on the browser implementation, so depending on your targets you'll need to do some more research. Here is an excellent post and a good starting point:
Browser Rounding and Fractional Pixels
I am often programming mathematical algorithms that assume a nondimensional parameter spans the continuous space from 0..1 inclusive. These algorithms could in theory benefit from maximum resolution over the parameter space and I've considered that it would be of use to expend the full 32 or 64 bits of precision over the parameter space, with none wasted for exponents or signs.
I imagine the methods would look similar to an unsigned integer divided by its maximum representable value. Does this exist already and if so where, if not, is there a compelling reason why?
Can't you simply do all the calculations in integers from 0 to MAX_INT, keeping all the same formulas/algorithms/whatever, and then use the "unsigned integer divided by its maximum representable value" conversion as the very final step before printing the result to the user (or otherwise outputting it, for example in intermediate logs)?
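A minimal C sketch of that suggestion (the scale factor UINT32_MAX and the helper name to_unit are just illustrative choices, not any standard API):

#include <stdint.h>
#include <stdio.h>

/* Treat a uint32_t as a fixed-point value in [0, 1]:
   the integer u represents the real number u / UINT32_MAX. */
static double to_unit(uint32_t u)
{
    return (double) u / (double) UINT32_MAX;
}

int main(void)
{
    uint32_t a = UINT32_MAX / 4;    /* ~0.25 */
    uint32_t b = UINT32_MAX / 2;    /* ~0.5  */

    /* Do the work in integers (this addition stays in range),
       convert only when presenting the result. */
    uint32_t sum = a + b;
    printf("%f\n", to_unit(sum));   /* ~0.75 */
    return 0;
}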
The representation doesn't make sense without algorithms. E.g. you could represent it as fixed point (i.e. 0..MAX_INT / MAX_INT), or as floating point with a mantissa and an exponent (e.g. to be able to store values like 1e-1000), or as something custom (e.g. to be able to represent a number like 1/π precisely). After that you have to define algorithms to manipulate the numbers in such representations. So, in other words, there is no silver bullet to cover all cases. Only you know your task and can choose the best solution.
Moreover, a continuous space is impossible to represent using computers, because it has an infinite number of elements, so it cannot be algorithmized.