What does CSS measurement unit 'em' actually stand for? [closed] - css

As the title says, I want to know what exactly the CSS unit 'em' stands for.
As in 'cm' stands for centimeter.
I am not asking what 'em' is or how it is used!
I couldn't find an answer on W3C.
The Wikipedia article at least had "something" to say:
The name of em is related to M. Originally the unit was derived from the width of the capital "M" in the given typeface.
My interpretation of the first sentence would be that 'em' is simply the phonetic spelling of the letter 'M'.
But reading the second sentence, it seems that the 'e' in 'em' stands for something related to the width of the letter 'M'.
So I'm still lost as to what 'em' really stands for!

That is a historical definition; in modern usage it simply refers to the size of the font, with the word "em" itself no longer having any practical or relevant meaning. As a matter of fact, the same Wikipedia article expands on this evolution in its usage and meaning in a later section:
One em was traditionally defined as the width of the capital "M" in the current typeface and point size, as the "M" was commonly cast the full-width of the square "blocks", or "em-quads" (also "mutton-quads"), which are used in printing presses. However, in modern typefaces, the character M is usually somewhat less than one em wide. Moreover, as the term has expanded to include a wider variety of languages and character sets, its meaning has evolved; this has allowed it to include those fonts, typefaces, and character sets which do not include a capital "M", such as Chinese and the Arabic alphabet. Thus, em generally means the point size of the font in question, which is the same as the height of the metal body a font was cast on.
Particularly in terms of CSS, an "em" doesn't necessarily refer to the width of the capital M for a particular font; it's just a relative quantity.
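To see that relativity concretely, here is a quick sketch you can run in a browser console; the element and the 20px value are arbitrary choices for illustration:

```javascript
// Demo: 1em resolves against the element's own computed font-size,
// regardless of what any capital M in the font actually measures.
const box = document.createElement('div');
box.style.fontSize = '20px'; // em lengths on this element measure against this
box.style.width = '2em';     // should resolve to 2 × 20px
document.body.appendChild(box);

const computed = getComputedStyle(box);
console.log(computed.fontSize); // "20px"
console.log(computed.width);    // "40px"
document.body.removeChild(box);
```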
If you're asking about the etymology of the word "em", Wikipedia itself only contains a reference to the Adobe Glossary, which has little more to say about it:
A common unit of measurement in typography. Em is traditionally defined as the width of the uppercase M in the current face and point size. It is more properly defined as simply the current point size. For example, in 12-point type, em is a distance of 12 points.
It's not explicitly mentioned anywhere authoritative that it's a phonetic representation of the capital M, but considering its namesake definition I wouldn't rule out such a possibility.

In my opinion, 'em' stands for nothing; it is just the phonetic spelling of the letter 'M'. Similarly, we have 'ex': one ex is the x-height of a font (the x-height is usually about half the font-size).

Related

Why is CSS 'ex' unit defined using the 'first available font'?

Why is the em unit defined in terms of the font actually used to render the text, and the ex unit using the first available font?
To me, that looks like the font used to compute the height of ex can be different from the font actually used to render the text.
Quoting the specs:
The first available font, used for example in the definition of font-relative lengths such as ‘ex’ and ‘ch’ or in the definition of the ‘line-height’ property, is defined to be the first available font that would match the U+0020 (space) character given font families in the ‘font-family’ list (or a user agent's default font if none are available).
Why does the algorithm look at the space character to compute the height of the letter 'x'? An explanation in layman's terms would be much appreciated.
Why is the em unit defined in terms of the font actually used to render the text, and the ex unit using the first available font?
This shouldn’t be the case: both units are intended to be relative to the current font. The definition you provided mentions “font-relative lengths such as ‘ex’,” which also includes the ‘em’ unit.
That said, it seems like the authors agreed that the definition of “first available font” should be clarified: https://github.com/w3c/csswg-drafts/issues/4796
The section you quoted seems to imply that if the first font in the font-family list exists, but the U+0020 (space) character isn't in that font, then the next font should be used. In practice, it sounds like browsers weren't doing this anyway, and that probably wasn't the original intent.
You can see the change that is being made to the definition here, as summarized in that issue: https://github.com/w3c/csswg-drafts/commit/7c2108c1764f328e0b60fffed47d3885a3dc7c11?diff=split
Why does the algorithm look at the space character to compute the height of the letter 'x'? An explanation in layman's terms would be much appreciated.
For the purpose of collecting and calculating font metrics, the U+0020 space is most likely the earliest and most common code point that could contain that information, so it makes sense to check it. Many metrics are calculated at that point, such as the line height and the em unit, not just the ex unit.
Beyond that, the CSS specification's section on the 'ex' unit gives more detail on how that value is determined:
The x-height is so called because it is often equal to the height of the lowercase "x". However, an ex is defined even for fonts that do not contain an "x". The x-height of a font can be found in different ways. Some fonts contain reliable metrics for the x-height. If reliable font metrics are not available, UAs may determine the x-height from the height of a lowercase glyph. One possible heuristic is to look at how far the glyph for the lowercase "o" extends below the baseline, and subtract that value from the top of its bounding box. In the cases where it is impossible or impractical to determine the x-height, a value of 0.5em must be assumed.
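As a rough illustration of that "height of a lowercase glyph" heuristic, the sketch below approximates a font's x-height from canvas text metrics. actualBoundingBoxAscent is a standard TextMetrics property, but the exact value it reports depends on the browser's font machinery, so treat the output as indicative only:

```javascript
// Approximate the x-height of a font by measuring a lowercase glyph,
// in the spirit of the spec's fallback heuristic.
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
ctx.font = '100px serif'; // 1em = 100px, so the ratio is easy to read off

const xHeightPx = ctx.measureText('x').actualBoundingBoxAscent;
console.log('ex ≈ ' + (xHeightPx / 100) + 'em'); // often near the 0.5em fallback
```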

max number of decimals allowed by CSS transform scale?

I'm trying to reduce the number of decimals of a JS operation and use the result to set a transform: scale(x) inline style on an element.
I can't find any reference stating how many decimals are allowed by this CSS function.
I want to know how many digits are allowed (and used by the browser in the transformation) after the decimal point. (0.0000000N)
The specification defines the value for scale as a <number>, which is defined as:
A number is either an <integer> or zero or more decimal digits followed by a dot (.) followed by one or more decimal digits and optionally an exponent composed of "e" or "E" and an integer. It corresponds to the <number-token> production in the CSS Syntax Module [CSS3SYN]. As with integers, the first character of a number may be immediately preceded by - or + to indicate the number’s sign.
Note that there is no stated limit on how many "more" decimal digits are allowed. So any limit will be imposed by the browser, and it will obviously vary by browser.
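One way to see a given browser's limit is to hand it a high-precision value and read back what it actually stored. A minimal probe, assuming you run it in a browser console (the example output is illustrative and will differ between engines):

```javascript
// Probe: set a high-precision scale() and read back the serialized
// matrix; digits beyond the engine's internal precision are already gone.
const el = document.createElement('div');
document.body.appendChild(el);
el.style.transform = 'scale(0.123456789012345)';
console.log(getComputedStyle(el).transform);
// e.g. "matrix(0.123457, 0, 0, 0.123457, 0, 0)" in some browsers
document.body.removeChild(el);
```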
Since it could be useful for others, and to extend the accepted answer, I'll upgrade my comment to an answer:
Ultimately, the number of decimals you'll get depends mainly on the browser implementation, so depending on your target browsers you'll need to do some more research. Here is an excellent post and a good starting point:
Browser Rounding and Fractional Pixels

How many significant figures can a number have in CSS?

My question is, simply, how many (non-zero) decimal places can I include in a value I use in a CSS stylesheet before the browser rounds the number when interpreting it?
NOTE: I am aware that any decimal pixels are rounded (differently by different browsers) because screens cannot display sub-pixel units. What I am asking is: before that rounding takes place, how many decimal places will be retained when the browser begins performing its final rendering calculations/roundings?
Be it truncation or rounding, in an ideal world, neither of these things should happen. The spec simply says that a numeric value may either consist of
one or more digits, or
any number of digits, followed by a period for the decimal point, followed by one or more digits.
The spec even accounts for the fact that the leading zero before the decimal point in a value that's less than 1 is not significant and can thus be omitted, e.g. opacity: .5. But there is quite simply no theoretical upper limit.
But, due to implementation limitations, browsers will often "round" values for the purposes of rendering. This is not something you can control other than by changing the precision of your values, and even so, this behavior can vary between browsers, for obvious reasons, and is therefore something you cannot rely on.
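If you want to observe this for a particular engine, a small probe like the following shows how much fractional precision survives. The output is illustrative; engines commonly store layout values in fixed-point units (for example 1/64 of a pixel in Blink), so results differ per browser:

```javascript
// Probe: how much fractional-pixel precision does the engine keep for a
// length? The readback is snapped to the engine's internal layout units.
const el = document.createElement('div');
document.body.appendChild(el);
el.style.width = '100.123456789px';
console.log(getComputedStyle(el).width); // e.g. "100.125px"
document.body.removeChild(el);
```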

What is the "Law of the Eight"?

While studying this document on the Evolution of JPEG, I came across "The Law of the Eight" in Section 7.3 of the above document.
Despite the introduction of other block sizes from 1 to 16 with the SmartScale extension, beyond the fixed size 8 in the original JPEG standard, the fact remains that the block size of 8 will still be the default value, and that all other-size DCTs are scaled in reference to the standard 8x8 DCT.
The “Law of the Eight” explains, why the size 8 is the right default and reference value for the DCT size.
My question is
What exactly is this "law of the eight" ?
Historically, was a study performed that evaluated numerous sample images to arrive at the conclusion that an 8x8 image block contains enough redundant data to support compression techniques using the DCT? With very large image sizes like 8M (4Kx4K) fast becoming the norm in digital images/videos, is this assumption still valid?
Another historical reason to limit the macro-block to 8x8 would have been the computationally prohibitive image-data size for larger macro-blocks. With modern super-scalar architectures (e.g. CUDA) that restriction no longer applies.
Earlier similar questions exist - 1, 2 and 3. But none of them provide any details/links/references to this mysterious fundamental "law of the eight".
1. References/excerpts/details of the original study would be highly appreciated, as I would like to repeat it with a modern data-set of very large images to test the validity of 8x8 macro-blocks being optimal.
2. In case a similar study has been carried out more recently, references to it are welcome too.
3. I do understand that SmartScale is controversial. Without any clear potential benefits 1, at best it is comparable with other backward-compatible extensions of the JPEG standard 2. My goal is to understand whether the original reasons behind choosing 8x8 as the DCT block-size (in the JPEG image compression standard) are still relevant, hence I need to know what the Law of the Eight is.
My understanding is, the Law of the Eight is simply a humorous reference to the fact that the Baseline JPEG algorithm prescribed 8x8 as its only block size.
P.S. In other words, "the Law of the Eight" is a way to explain why "all other-size DCTs are scaled in reference to the 8x8 DCT" by bringing in the historical perspective: the lack of support for any other size in the original standard and its de facto implementations.
The next question to ask is: why eight? (Note that despite being a valid question, this is not the subject of the present discussion, which would still be relevant even if another value had been picked historically, e.g. a "Law of the Ten" or "Law of the Thirty-Two".) The answer to that one is: because the computational complexity of the problem grows as O(N^2) (unless FCT-class algorithms are employed, which grow more slowly, as O(N log N), but are harder to implement on the primitive hardware of embedded platforms, hence their limited applicability), so larger block sizes quickly become impractical. This is why 8x8 was chosen: small enough to be practical on a wide range of platforms, but large enough to allow for not-too-coarse control of quantization levels for different frequencies.
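To make the complexity claim concrete, here is a sketch of a naive 1-D DCT-II (unnormalized); the nested loops are where the O(N^2) cost comes from. This is purely an illustration, not production JPEG code:

```javascript
// Naive 1-D DCT-II: each of the N output coefficients scans all N
// inputs, giving O(N^2) work. Fast cosine transforms reorganize this
// into O(N log N) butterflies, at the price of a trickier implementation.
function naiveDct(samples) {
  const N = samples.length;
  const out = new Array(N);
  for (let k = 0; k < N; k++) {     // N coefficients...
    let sum = 0;
    for (let n = 0; n < N; n++) {   // ...each summing over N samples
      sum += samples[n] * Math.cos((Math.PI / N) * (n + 0.5) * k);
    }
    out[k] = sum;
  }
  return out;
}

console.log(naiveDct([52, 55, 61, 66, 70, 61, 64, 73])); // one 8-sample row
```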
Since the standard has clearly scratched an itch, a whole ecosphere soon grew around it, including implementations optimized for 8x8 as their sole supported block size. Once the ecosphere was in place, it became impossible to change the block size without breaking existing implementations. As that was highly undesirable, any tweaks to DCT/quantization parameters had to remain compatible with 8x8-only decoders. I believe this consideration must be what's referred to as the "Law of the Eight".
While I am not an expert, I don't see how larger block sizes could help. First, the dynamic range of values in one block will increase on average, requiring more bits to represent them. Second, the relative quantization of frequencies ranging from "all" (represented by the whole block) down to "pixel" has to stay the same (it is dictated by human perception bias, after all); the quantization will just get a bit smoother, and for the same compression level the potential quality increase will likely be unnoticeable.

Scrabble - the best move [closed]

I have made an algorithm for Scrabble. It uses a highest-score strategy, but I do not think that is the best way to play the game.
My question is: is there any advanced math for Scrabble that suggests not the highest-scoring word but another one that will increase the probability of winning?
Or in other words, some strategy different from highest score?
I have my own ideas about what it could be. For example, suppose there are two words that have almost the same score (s1 > s2), but the second word does not open a new way to a 3W or 2W square. Even though its score is less than the score of the first one, it may be better to play the second word and not the first one.
From my experience with Scrabble, you are correct in that you don't necessarily always want to suggest the highest-scoring word. Rather, you want to suggest the best word. I don't think this requires a lot of advanced math to pull off.
Here are some suggestions:
In your current algorithm, rank all your letters, particularly consonants, by ease of use. For example, the letter "S" would have the highest ease of use because it is the most flexible. That is, when you play a given word and hold the letter "S" back, you are essentially opening up the possibility of better word choices with the new letters that come into play on your next turn.
Balance out vowel and consonant usage in your words. As a regular Scrabble player, I don't always play the best-scoring word if it doesn't use enough vowels. For example, if I use 4 letters that contain no vowels and I have 3 vowels left in my rack, chances are I will draw at least two vowels on my next turn, which would leave me with 5 vowels and 2 consonants, and that likely doesn't open up a lot of opportunity for high-scoring words. It is almost always better to use more vowels than consonants in your words, especially the letter I. Your algorithm should reflect some of this when selecting the best word.
I hope this gives you a good start. Once your algorithm is able to select the best-scoring word, you can fine-tune it with these suggestions in order to be an overall better scorer in your Scrabble games. (I am assuming this is some sort of AI you are creating.) A rough sketch of how such heuristics might combine follows below.
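Here is that sketch. Everything in it is invented for illustration: the flexibility weights, the vowel-balance penalty, and the shape of the candidate objects are assumptions, not tuned values from any real engine:

```javascript
// Score the "leave" (the letters kept after a play), rewarding flexible
// letters and a balanced vowel/consonant mix, then pick the move that
// maximizes board points plus leave value rather than board points alone.
const FLEXIBILITY = { S: 8, E: 4, R: 4, T: 3, N: 3 }; // made-up weights

function leaveScore(leave) {
  let score = 0;
  let vowels = 0;
  for (const letter of leave) {
    score += FLEXIBILITY[letter] || 0;
    if ('AEIOU'.includes(letter)) vowels++;
  }
  // Penalize lopsided racks: ideally about half the kept letters are vowels.
  return score - 2 * Math.abs(vowels - (leave.length - vowels));
}

function bestMove(candidates) { // candidates: [{ word, points, leave }, ...]
  return candidates.reduce((best, move) =>
    move.points + leaveScore(move.leave) >
    best.points + leaveScore(best.leave) ? move : best);
}
```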
My question is: is there any advanced math for Scrabble that suggests not the highest-scoring word but another one that will increase the probability of winning?
As ROFLwTIME mentioned, you need to also account for the letters that you haven't played.
In doing that accounting, you need to account for how letters interact with one another. For example, suppose you have a Q, a U, and five other letters. Suppose the best you can score playing both the Q and the U is 30 points, but you can score more by playing the U but leaving the Q unplayed. Unless that "more" is much more than 30, either play the word with the Q or find a third word that leaves both the Q and the U unplayed.
You also need to account for the opportunities the word you play creates for your opponents. A typical game-theory strategy is to maximize your score while minimizing your opponent's score, "maximin" for short. Playing a 20-point word that allows your opponent to play a 50-point word is not a good idea.
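A toy sketch of that maximin selection follows; opponentBestReply is a hypothetical stand-in for however you evaluate the opponent's strongest answering play, not a real API:

```javascript
// Choose the move that maximizes our points minus the opponent's best
// reply -- the worst case for us after each candidate play.
function maximinMove(moves, opponentBestReply) {
  let best = null;
  let bestValue = -Infinity;
  for (const move of moves) {
    const value = move.points - opponentBestReply(move);
    if (value > bestValue) {
      bestValue = value;
      best = move;
    }
  }
  return best; // a 20-point play enabling a 50-point reply nets -30 here
}
```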
